00:00:00.001 Started by upstream project "autotest-per-patch" build number 131127
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.044 The recommended git tool is: git
00:00:00.044 using credential 00000000-0000-0000-0000-000000000002
00:00:00.046 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.079 Fetching changes from the remote Git repository
00:00:00.082 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.140 Using shallow fetch with depth 1
00:00:00.140 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.140 > git --version # timeout=10
00:00:00.215 > git --version # 'git version 2.39.2'
00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.264 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.851 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.864 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.876 Checking out Revision bb1b9bfed281c179b06b3c39bbc702302ccac514 (FETCH_HEAD)
00:00:03.876 > git config core.sparsecheckout # timeout=10
00:00:03.886 > git read-tree -mu HEAD # timeout=10
00:00:03.904 > git checkout -f bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=5
00:00:03.925 Commit message: "scripts/kid: add issue 3551"
00:00:03.925 > git rev-list --no-walk bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=10
00:00:04.011 [Pipeline] Start of Pipeline
00:00:04.024 [Pipeline] library
00:00:04.026 Loading library shm_lib@master
00:00:04.026 Library shm_lib@master is cached. Copying from home.
00:00:04.046 [Pipeline] node
00:00:04.055 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.056 [Pipeline] {
00:00:04.067 [Pipeline] catchError
00:00:04.068 [Pipeline] {
00:00:04.080 [Pipeline] wrap
00:00:04.089 [Pipeline] {
00:00:04.099 [Pipeline] stage
00:00:04.101 [Pipeline] { (Prologue)
00:00:04.305 [Pipeline] sh
00:00:04.590 + logger -p user.info -t JENKINS-CI
00:00:04.605 [Pipeline] echo
00:00:04.606 Node: WFP6
00:00:04.614 [Pipeline] sh
00:00:04.912 [Pipeline] setCustomBuildProperty
00:00:04.934 [Pipeline] echo
00:00:04.936 Cleanup processes
00:00:04.941 [Pipeline] sh
00:00:05.226 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.226 269438 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.237 [Pipeline] sh
00:00:05.519 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.520 ++ grep -v 'sudo pgrep'
00:00:05.520 ++ awk '{print $1}'
00:00:05.520 + sudo kill -9
00:00:05.520 + true
00:00:05.533 [Pipeline] cleanWs
00:00:05.544 [WS-CLEANUP] Deleting project workspace...
00:00:05.544 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.551 [WS-CLEANUP] done
00:00:05.555 [Pipeline] setCustomBuildProperty
00:00:05.569 [Pipeline] sh
00:00:05.853 + sudo git config --global --replace-all safe.directory '*'
00:00:05.952 [Pipeline] httpRequest
00:00:06.364 [Pipeline] echo
00:00:06.366 Sorcerer 10.211.164.101 is alive
00:00:06.376 [Pipeline] retry
00:00:06.378 [Pipeline] {
00:00:06.392 [Pipeline] httpRequest
00:00:06.396 HttpMethod: GET
00:00:06.396 URL: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:06.397 Sending request to url: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:06.411 Response Code: HTTP/1.1 200 OK
00:00:06.411 Success: Status code 200 is in the accepted range: 200,404
00:00:06.412 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:15.180 [Pipeline] }
00:00:15.199 [Pipeline] // retry
00:00:15.206 [Pipeline] sh
00:00:15.492 + tar --no-same-owner -xf jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:15.508 [Pipeline] httpRequest
00:00:15.902 [Pipeline] echo
00:00:15.904 Sorcerer 10.211.164.101 is alive
00:00:15.913 [Pipeline] retry
00:00:15.915 [Pipeline] {
00:00:15.928 [Pipeline] httpRequest
00:00:15.933 HttpMethod: GET
00:00:15.933 URL: http://10.211.164.101/packages/spdk_d6f411c3e161088368874d48969d641cb39f445b.tar.gz
00:00:15.934 Sending request to url: http://10.211.164.101/packages/spdk_d6f411c3e161088368874d48969d641cb39f445b.tar.gz
00:00:15.944 Response Code: HTTP/1.1 200 OK
00:00:15.945 Success: Status code 200 is in the accepted range: 200,404
00:00:15.945 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d6f411c3e161088368874d48969d641cb39f445b.tar.gz
00:02:38.489 [Pipeline] }
00:02:38.507 [Pipeline] // retry
00:02:38.520 [Pipeline] sh
00:02:38.806 + tar --no-same-owner -xf spdk_d6f411c3e161088368874d48969d641cb39f445b.tar.gz
00:02:41.356 [Pipeline] sh
00:02:41.642 + git -C spdk log --oneline -n5
00:02:41.642 d6f411c3e util: handle events for vfio fd type
00:02:41.642 e3158d7d2 util: Extended options for spdk_fd_group_add
00:02:41.642 76e790f9a test/unit: add missing fd_group unit tests
00:02:41.642 02c0773db util/fd_group: improve logs and documentation
00:02:41.642 282b0f4a2 nvme: interface to retrieve fd for a queue
00:02:41.653 [Pipeline] }
00:02:41.667 [Pipeline] // stage
00:02:41.675 [Pipeline] stage
00:02:41.677 [Pipeline] { (Prepare)
00:02:41.694 [Pipeline] writeFile
00:02:41.709 [Pipeline] sh
00:02:41.994 + logger -p user.info -t JENKINS-CI
00:02:42.007 [Pipeline] sh
00:02:42.293 + logger -p user.info -t JENKINS-CI
00:02:42.305 [Pipeline] sh
00:02:42.637 + cat autorun-spdk.conf
00:02:42.637 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:42.637 SPDK_TEST_NVMF=1
00:02:42.637 SPDK_TEST_NVME_CLI=1
00:02:42.637 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:42.637 SPDK_TEST_NVMF_NICS=e810
00:02:42.637 SPDK_TEST_VFIOUSER=1
00:02:42.637 SPDK_RUN_UBSAN=1
00:02:42.637 NET_TYPE=phy
00:02:42.670 RUN_NIGHTLY=0
00:02:42.674 [Pipeline] readFile
00:02:42.699 [Pipeline] withEnv
00:02:42.702 [Pipeline] {
00:02:42.715 [Pipeline] sh
00:02:43.001 + set -ex
00:02:43.001 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:43.001 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:43.001 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:43.001 ++ SPDK_TEST_NVMF=1
00:02:43.001 ++ SPDK_TEST_NVME_CLI=1
00:02:43.001 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:43.001 ++ SPDK_TEST_NVMF_NICS=e810
00:02:43.001 ++ SPDK_TEST_VFIOUSER=1
00:02:43.001 ++ SPDK_RUN_UBSAN=1
00:02:43.001 ++ NET_TYPE=phy
00:02:43.001 ++ RUN_NIGHTLY=0
00:02:43.001 + case $SPDK_TEST_NVMF_NICS in
00:02:43.001 + DRIVERS=ice
00:02:43.001 + [[ tcp == \r\d\m\a ]]
00:02:43.001 + [[ -n ice ]]
00:02:43.001 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:43.001 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:43.001 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:43.001 rmmod: ERROR: Module irdma is not currently loaded
00:02:43.001 rmmod: ERROR: Module i40iw is not currently loaded
00:02:43.001 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:43.001 + true
00:02:43.001 + for D in $DRIVERS
00:02:43.001 + sudo modprobe ice
00:02:43.001 + exit 0
00:02:43.011 [Pipeline] }
00:02:43.026 [Pipeline] // withEnv
00:02:43.031 [Pipeline] }
00:02:43.045 [Pipeline] // stage
00:02:43.055 [Pipeline] catchError
00:02:43.057 [Pipeline] {
00:02:43.071 [Pipeline] timeout
00:02:43.071 Timeout set to expire in 1 hr 0 min
00:02:43.073 [Pipeline] {
00:02:43.086 [Pipeline] stage
00:02:43.088 [Pipeline] { (Tests)
00:02:43.101 [Pipeline] sh
00:02:43.387 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:43.388 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:43.388 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:43.388 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:43.388 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:43.388 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:43.388 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:43.388 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:43.388 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:43.388 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:43.388 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:43.388 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:43.388 + source /etc/os-release
00:02:43.388 ++ NAME='Fedora Linux'
00:02:43.388 ++ VERSION='39 (Cloud Edition)'
00:02:43.388 ++ ID=fedora
00:02:43.388 ++ VERSION_ID=39
00:02:43.388 ++ VERSION_CODENAME=
00:02:43.388 ++ PLATFORM_ID=platform:f39
00:02:43.388 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:43.388 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:43.388 ++ LOGO=fedora-logo-icon
00:02:43.388 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:43.388 ++ HOME_URL=https://fedoraproject.org/
00:02:43.388 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:43.388 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:43.388 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:43.388 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:43.388 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:43.388 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:43.388 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:43.388 ++ SUPPORT_END=2024-11-12
00:02:43.388 ++ VARIANT='Cloud Edition'
00:02:43.388 ++ VARIANT_ID=cloud
00:02:43.388 + uname -a
00:02:43.388 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:43.388 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:45.926 Hugepages
00:02:45.926 node hugesize free / total
00:02:45.926 node0 1048576kB 0 / 0
00:02:45.926 node0 2048kB 0 / 0
00:02:45.926 node1 1048576kB 0 / 0
00:02:45.926 node1 2048kB 0 / 0
00:02:45.926
00:02:45.926 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:45.926 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:45.926 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:45.926 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:45.926 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:45.926 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:45.926 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:45.926 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:45.926 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:45.926 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:45.926 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:45.926 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:45.926 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:45.926 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:45.926 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:45.926 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:45.926 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:45.926 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:45.926 + rm -f /tmp/spdk-ld-path
00:02:45.927 + source autorun-spdk.conf
00:02:45.927 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:45.927 ++ SPDK_TEST_NVMF=1
00:02:45.927 ++ SPDK_TEST_NVME_CLI=1
00:02:45.927 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:45.927 ++ SPDK_TEST_NVMF_NICS=e810
00:02:45.927 ++ SPDK_TEST_VFIOUSER=1
00:02:45.927 ++ SPDK_RUN_UBSAN=1
00:02:45.927 ++ NET_TYPE=phy
00:02:45.927 ++ RUN_NIGHTLY=0
00:02:45.927 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:45.927 + [[ -n '' ]]
00:02:45.927 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:45.927 + for M in /var/spdk/build-*-manifest.txt
00:02:45.927 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:45.927 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:45.927 + for M in /var/spdk/build-*-manifest.txt
00:02:45.927 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:45.927 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:45.927 + for M in /var/spdk/build-*-manifest.txt
00:02:45.927 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:45.927 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:45.927 ++ uname
00:02:45.927 + [[ Linux == \L\i\n\u\x ]]
00:02:45.927 + sudo dmesg -T
00:02:46.187 + sudo dmesg --clear
00:02:46.187 + dmesg_pid=270874
00:02:46.187 + [[ Fedora Linux == FreeBSD ]]
00:02:46.187 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:46.187 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:46.187 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:46.187 + [[ -x /usr/src/fio-static/fio ]]
00:02:46.187 + export FIO_BIN=/usr/src/fio-static/fio
00:02:46.187 + FIO_BIN=/usr/src/fio-static/fio
00:02:46.187 + sudo dmesg -Tw
00:02:46.187 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:46.187 + [[ !
-v VFIO_QEMU_BIN ]] 00:02:46.187 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:46.187 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:46.187 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:46.187 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:46.187 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:46.187 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:46.187 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:46.187 Test configuration: 00:02:46.187 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:46.187 SPDK_TEST_NVMF=1 00:02:46.187 SPDK_TEST_NVME_CLI=1 00:02:46.187 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:46.187 SPDK_TEST_NVMF_NICS=e810 00:02:46.187 SPDK_TEST_VFIOUSER=1 00:02:46.187 SPDK_RUN_UBSAN=1 00:02:46.187 NET_TYPE=phy 00:02:46.187 RUN_NIGHTLY=0 16:27:50 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:46.187 16:27:50 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:46.187 16:27:50 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:46.187 16:27:50 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:46.187 16:27:50 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:46.187 16:27:50 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:46.187 16:27:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.187 16:27:50 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.187 16:27:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.187 16:27:50 -- paths/export.sh@5 -- $ export PATH 00:02:46.187 16:27:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.187 16:27:50 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:46.187 16:27:50 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:46.187 16:27:50 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728916070.XXXXXX 00:02:46.187 16:27:50 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1728916070.jalAFz 00:02:46.187 16:27:50 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:46.187 16:27:50 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:46.187 16:27:50 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:46.187 16:27:50 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:46.187 16:27:50 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:46.187 16:27:50 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:46.187 16:27:50 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:46.187 16:27:50 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.187 16:27:50 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:46.187 16:27:50 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:46.187 16:27:50 -- pm/common@17 -- $ local monitor 00:02:46.187 16:27:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.187 16:27:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.187 16:27:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.187 16:27:50 -- pm/common@21 -- $ date +%s 00:02:46.187 16:27:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.187 16:27:50 -- pm/common@21 -- $ date +%s 00:02:46.187 16:27:50 -- pm/common@25 -- $ sleep 1 00:02:46.187 16:27:50 -- pm/common@21 -- $ date +%s 00:02:46.187 16:27:50 -- pm/common@21 -- $ date +%s 00:02:46.187 16:27:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728916070 00:02:46.187 16:27:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728916070 00:02:46.187 16:27:50 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728916070 00:02:46.187 16:27:50 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728916070 00:02:46.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728916070_collect-vmstat.pm.log 00:02:46.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728916070_collect-cpu-load.pm.log 00:02:46.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728916070_collect-cpu-temp.pm.log 00:02:46.187 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728916070_collect-bmc-pm.bmc.pm.log 00:02:47.126 16:27:51 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:47.126 16:27:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:47.126 16:27:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:47.126 16:27:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:47.126 16:27:51 -- spdk/autobuild.sh@16 -- $ date -u 00:02:47.126 Mon Oct 14 02:27:51 PM UTC 2024 00:02:47.126 16:27:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:47.386 v25.01-pre-73-gd6f411c3e 00:02:47.387 16:27:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:47.387 16:27:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:47.387 16:27:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:47.387 16:27:51 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:47.387 16:27:51 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:47.387 16:27:51 -- common/autotest_common.sh@10 -- $ set +x 00:02:47.387 ************************************ 00:02:47.387 START TEST ubsan 00:02:47.387 ************************************ 00:02:47.387 16:27:51 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:47.387 using ubsan 00:02:47.387 00:02:47.387 real 0m0.000s 00:02:47.387 user 0m0.000s 00:02:47.387 sys 0m0.000s 00:02:47.387 16:27:51 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:47.387 16:27:51 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:47.387 ************************************ 00:02:47.387 END TEST ubsan 00:02:47.387 ************************************ 00:02:47.387 16:27:51 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:47.387 16:27:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:47.387 16:27:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:47.387 16:27:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:47.387 16:27:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:47.387 16:27:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:47.387 16:27:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:47.387 16:27:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:47.387 16:27:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:47.387 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:47.387 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:47.956 Using 'verbs' RDMA provider 00:03:00.742 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:12.961 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:12.961 Creating mk/config.mk...done. 00:03:12.961 Creating mk/cc.flags.mk...done. 00:03:12.961 Type 'make' to build. 
00:03:12.961 16:28:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:03:12.961 16:28:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:12.961 16:28:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:12.961 16:28:17 -- common/autotest_common.sh@10 -- $ set +x 00:03:12.961 ************************************ 00:03:12.961 START TEST make 00:03:12.961 ************************************ 00:03:12.961 16:28:17 make -- common/autotest_common.sh@1125 -- $ make -j96 00:03:13.220 make[1]: Nothing to be done for 'all'. 00:03:14.615 The Meson build system 00:03:14.615 Version: 1.5.0 00:03:14.615 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:14.615 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:14.615 Build type: native build 00:03:14.615 Project name: libvfio-user 00:03:14.615 Project version: 0.0.1 00:03:14.615 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:14.615 C linker for the host machine: cc ld.bfd 2.40-14 00:03:14.615 Host machine cpu family: x86_64 00:03:14.615 Host machine cpu: x86_64 00:03:14.615 Run-time dependency threads found: YES 00:03:14.615 Library dl found: YES 00:03:14.615 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:14.615 Run-time dependency json-c found: YES 0.17 00:03:14.615 Run-time dependency cmocka found: YES 1.1.7 00:03:14.615 Program pytest-3 found: NO 00:03:14.615 Program flake8 found: NO 00:03:14.615 Program misspell-fixer found: NO 00:03:14.615 Program restructuredtext-lint found: NO 00:03:14.615 Program valgrind found: YES (/usr/bin/valgrind) 00:03:14.615 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:14.615 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:14.615 Compiler for C supports arguments -Wwrite-strings: YES 00:03:14.615 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:14.615 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:14.615 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:14.615 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:14.615 Build targets in project: 8 00:03:14.615 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:14.615 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:14.615 00:03:14.615 libvfio-user 0.0.1 00:03:14.615 00:03:14.615 User defined options 00:03:14.615 buildtype : debug 00:03:14.615 default_library: shared 00:03:14.615 libdir : /usr/local/lib 00:03:14.615 00:03:14.615 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:15.181 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:15.181 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:15.181 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:15.181 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:15.181 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:15.181 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:15.181 [6/37] Compiling C object samples/null.p/null.c.o 00:03:15.181 [7/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:15.181 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:15.181 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:15.181 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:15.181 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:15.181 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:15.181 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:15.181 [14/37] Compiling C object samples/server.p/server.c.o 00:03:15.181 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:15.181 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:15.181 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:15.181 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:15.181 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:15.181 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:15.181 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:15.181 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:15.181 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:15.181 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:15.181 [25/37] Compiling C object samples/client.p/client.c.o 00:03:15.181 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:15.181 [27/37] Linking target samples/client 00:03:15.439 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:15.439 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:15.439 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:15.439 [31/37] Linking target test/unit_tests 00:03:15.439 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:15.439 [33/37] Linking target samples/null 00:03:15.439 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:15.439 [35/37] Linking target samples/gpio-pci-idio-16 00:03:15.439 [36/37] Linking target samples/lspci 00:03:15.439 [37/37] Linking target samples/server 00:03:15.697 INFO: autodetecting backend as ninja 00:03:15.697 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:15.697 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:15.954 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:15.954 ninja: no work to do. 00:03:21.229 The Meson build system 00:03:21.229 Version: 1.5.0 00:03:21.229 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:21.229 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:21.229 Build type: native build 00:03:21.229 Program cat found: YES (/usr/bin/cat) 00:03:21.229 Project name: DPDK 00:03:21.229 Project version: 24.03.0 00:03:21.229 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:21.229 C linker for the host machine: cc ld.bfd 2.40-14 00:03:21.229 Host machine cpu family: x86_64 00:03:21.229 Host machine cpu: x86_64 00:03:21.229 Message: ## Building in Developer Mode ## 00:03:21.229 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:21.229 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:21.229 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:21.229 Program python3 found: YES (/usr/bin/python3) 00:03:21.229 Program cat found: YES (/usr/bin/cat) 00:03:21.229 Compiler for C supports arguments -march=native: YES 00:03:21.229 Checking for size of "void *" : 8 00:03:21.229 Checking for size of "void *" : 8 (cached) 00:03:21.229 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:21.229 Library m found: YES 00:03:21.229 Library numa found: YES 00:03:21.229 Has header "numaif.h" : YES 00:03:21.229 Library fdt found: NO 00:03:21.229 Library execinfo found: NO 00:03:21.229 Has header "execinfo.h" : YES 00:03:21.229 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:21.229 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:21.229 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:21.229 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:21.229 Run-time dependency openssl found: YES 3.1.1 00:03:21.229 Run-time dependency libpcap found: YES 1.10.4 00:03:21.229 Has header "pcap.h" with dependency libpcap: YES 00:03:21.229 Compiler for C supports arguments -Wcast-qual: YES 00:03:21.229 Compiler for C supports arguments -Wdeprecated: YES 00:03:21.229 Compiler for C supports arguments -Wformat: YES 00:03:21.229 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:21.229 Compiler for C supports arguments -Wformat-security: NO 00:03:21.229 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:21.229 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:21.229 Compiler for C supports arguments -Wnested-externs: YES 00:03:21.229 Compiler for C supports arguments -Wold-style-definition: YES 00:03:21.229 Compiler for C supports arguments -Wpointer-arith: YES 00:03:21.229 Compiler for C supports arguments -Wsign-compare: YES 00:03:21.229 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:21.229 Compiler for C supports arguments -Wundef: YES 00:03:21.229 Compiler for C supports arguments -Wwrite-strings: YES 00:03:21.229 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:21.229 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:03:21.229 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:21.229 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:21.229 Program objdump found: YES (/usr/bin/objdump) 00:03:21.229 Compiler for C supports arguments -mavx512f: YES 00:03:21.229 Checking if "AVX512 checking" compiles: YES 00:03:21.229 Fetching value of define "__SSE4_2__" : 1 00:03:21.229 Fetching value of define "__AES__" : 1 00:03:21.229 Fetching value of define "__AVX__" : 1 00:03:21.229 Fetching value of define "__AVX2__" : 1 00:03:21.229 Fetching value of define "__AVX512BW__" : 1 00:03:21.229 Fetching value of define "__AVX512CD__" : 1 00:03:21.229 Fetching value of define "__AVX512DQ__" : 1 00:03:21.229 Fetching value of define "__AVX512F__" : 1 00:03:21.229 Fetching value of define "__AVX512VL__" : 1 00:03:21.229 Fetching value of define "__PCLMUL__" : 1 00:03:21.229 Fetching value of define "__RDRND__" : 1 00:03:21.229 Fetching value of define "__RDSEED__" : 1 00:03:21.229 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:21.229 Fetching value of define "__znver1__" : (undefined) 00:03:21.229 Fetching value of define "__znver2__" : (undefined) 00:03:21.229 Fetching value of define "__znver3__" : (undefined) 00:03:21.229 Fetching value of define "__znver4__" : (undefined) 00:03:21.229 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:21.229 Message: lib/log: Defining dependency "log" 00:03:21.229 Message: lib/kvargs: Defining dependency "kvargs" 00:03:21.229 Message: lib/telemetry: Defining dependency "telemetry" 00:03:21.229 Checking for function "getentropy" : NO 00:03:21.229 Message: lib/eal: Defining dependency "eal" 00:03:21.229 Message: lib/ring: Defining dependency "ring" 00:03:21.229 Message: lib/rcu: Defining dependency "rcu" 00:03:21.229 Message: lib/mempool: Defining dependency "mempool" 00:03:21.229 Message: lib/mbuf: Defining dependency "mbuf" 00:03:21.229 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:21.229 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:21.229 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:21.229 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:21.229 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:21.229 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:21.229 Compiler for C supports arguments -mpclmul: YES 00:03:21.229 Compiler for C supports arguments -maes: YES 00:03:21.229 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:21.229 Compiler for C supports arguments -mavx512bw: YES 00:03:21.229 Compiler for C supports arguments -mavx512dq: YES 00:03:21.229 Compiler for C supports arguments -mavx512vl: YES 00:03:21.229 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:21.229 Compiler for C supports arguments -mavx2: YES 00:03:21.229 Compiler for C supports arguments -mavx: YES 00:03:21.229 Message: lib/net: Defining dependency "net" 00:03:21.229 Message: lib/meter: Defining dependency "meter" 00:03:21.229 Message: lib/ethdev: Defining dependency "ethdev" 00:03:21.229 Message: lib/pci: Defining dependency "pci" 00:03:21.229 Message: lib/cmdline: Defining dependency "cmdline" 00:03:21.229 Message: lib/hash: Defining dependency "hash" 00:03:21.229 Message: lib/timer: Defining dependency "timer" 00:03:21.229 Message: lib/compressdev: Defining dependency "compressdev" 00:03:21.229 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:21.229 Message: lib/dmadev: Defining dependency 
"dmadev" 00:03:21.229 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:21.229 Message: lib/power: Defining dependency "power" 00:03:21.229 Message: lib/reorder: Defining dependency "reorder" 00:03:21.229 Message: lib/security: Defining dependency "security" 00:03:21.229 Has header "linux/userfaultfd.h" : YES 00:03:21.229 Has header "linux/vduse.h" : YES 00:03:21.229 Message: lib/vhost: Defining dependency "vhost" 00:03:21.229 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:21.229 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:21.229 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:21.229 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:21.230 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:21.230 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:21.230 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:21.230 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:21.230 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:21.230 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:21.230 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:21.230 Configuring doxy-api-html.conf using configuration 00:03:21.230 Configuring doxy-api-man.conf using configuration 00:03:21.230 Program mandb found: YES (/usr/bin/mandb) 00:03:21.230 Program sphinx-build found: NO 00:03:21.230 Configuring rte_build_config.h using configuration 00:03:21.230 Message: 00:03:21.230 ================= 00:03:21.230 Applications Enabled 00:03:21.230 ================= 00:03:21.230 00:03:21.230 apps: 00:03:21.230 00:03:21.230 00:03:21.230 Message: 00:03:21.230 ================= 00:03:21.230 Libraries Enabled 00:03:21.230 ================= 00:03:21.230 00:03:21.230 libs: 00:03:21.230 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:21.230 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:21.230 cryptodev, dmadev, power, reorder, security, vhost, 00:03:21.230 00:03:21.230 Message: 00:03:21.230 =============== 00:03:21.230 Drivers Enabled 00:03:21.230 =============== 00:03:21.230 00:03:21.230 common: 00:03:21.230 00:03:21.230 bus: 00:03:21.230 pci, vdev, 00:03:21.230 mempool: 00:03:21.230 ring, 00:03:21.230 dma: 00:03:21.230 00:03:21.230 net: 00:03:21.230 00:03:21.230 crypto: 00:03:21.230 00:03:21.230 compress: 00:03:21.230 00:03:21.230 vdpa: 00:03:21.230 00:03:21.230 00:03:21.230 Message: 00:03:21.230 ================= 00:03:21.230 Content Skipped 00:03:21.230 ================= 00:03:21.230 00:03:21.230 apps: 00:03:21.230 dumpcap: explicitly disabled via build config 00:03:21.230 graph: explicitly disabled via build config 00:03:21.230 pdump: explicitly disabled via build config 00:03:21.230 proc-info: explicitly disabled via build config 00:03:21.230 test-acl: explicitly disabled via build config 00:03:21.230 test-bbdev: explicitly disabled via build config 00:03:21.230 test-cmdline: explicitly disabled via build config 00:03:21.230 test-compress-perf: explicitly disabled via build config 00:03:21.230 test-crypto-perf: explicitly disabled via build config 00:03:21.230 test-dma-perf: explicitly disabled via build config 00:03:21.230 test-eventdev: explicitly disabled via build config 00:03:21.230 test-fib: explicitly disabled via build config 00:03:21.230 test-flow-perf: explicitly disabled via build config 00:03:21.230 test-gpudev: explicitly 
disabled via build config 00:03:21.230 test-mldev: explicitly disabled via build config 00:03:21.230 test-pipeline: explicitly disabled via build config 00:03:21.230 test-pmd: explicitly disabled via build config 00:03:21.230 test-regex: explicitly disabled via build config 00:03:21.230 test-sad: explicitly disabled via build config 00:03:21.230 test-security-perf: explicitly disabled via build config 00:03:21.230 00:03:21.230 libs: 00:03:21.230 argparse: explicitly disabled via build config 00:03:21.230 metrics: explicitly disabled via build config 00:03:21.230 acl: explicitly disabled via build config 00:03:21.230 bbdev: explicitly disabled via build config 00:03:21.230 bitratestats: explicitly disabled via build config 00:03:21.230 bpf: explicitly disabled via build config 00:03:21.230 cfgfile: explicitly disabled via build config 00:03:21.230 distributor: explicitly disabled via build config 00:03:21.230 efd: explicitly disabled via build config 00:03:21.230 eventdev: explicitly disabled via build config 00:03:21.230 dispatcher: explicitly disabled via build config 00:03:21.230 gpudev: explicitly disabled via build config 00:03:21.230 gro: explicitly disabled via build config 00:03:21.230 gso: explicitly disabled via build config 00:03:21.230 ip_frag: explicitly disabled via build config 00:03:21.230 jobstats: explicitly disabled via build config 00:03:21.230 latencystats: explicitly disabled via build config 00:03:21.230 lpm: explicitly disabled via build config 00:03:21.230 member: explicitly disabled via build config 00:03:21.230 pcapng: explicitly disabled via build config 00:03:21.230 rawdev: explicitly disabled via build config 00:03:21.230 regexdev: explicitly disabled via build config 00:03:21.230 mldev: explicitly disabled via build config 00:03:21.230 rib: explicitly disabled via build config 00:03:21.230 sched: explicitly disabled via build config 00:03:21.230 stack: explicitly disabled via build config 00:03:21.230 ipsec: explicitly disabled via build config 00:03:21.230 pdcp: explicitly disabled via build config 00:03:21.230 fib: explicitly disabled via build config 00:03:21.230 port: explicitly disabled via build config 00:03:21.230 pdump: explicitly disabled via build config 00:03:21.230 table: explicitly disabled via build config 00:03:21.230 pipeline: explicitly disabled via build config 00:03:21.230 graph: explicitly disabled via build config 00:03:21.230 node: explicitly disabled via build config 00:03:21.230 00:03:21.230 drivers: 00:03:21.230 common/cpt: not in enabled drivers build config 00:03:21.230 common/dpaax: not in enabled drivers build config 00:03:21.230 common/iavf: not in enabled drivers build config 00:03:21.230 common/idpf: not in enabled drivers build config 00:03:21.230 common/ionic: not in enabled drivers build config 00:03:21.230 common/mvep: not in enabled drivers build config 00:03:21.230 common/octeontx: not in enabled drivers build config 00:03:21.230 bus/auxiliary: not in enabled drivers build config 00:03:21.230 bus/cdx: not in enabled drivers build config 00:03:21.230 bus/dpaa: not in enabled drivers build config 00:03:21.230 bus/fslmc: not in enabled drivers build config 00:03:21.230 bus/ifpga: not in enabled drivers build config 00:03:21.230 bus/platform: not in enabled drivers build config 00:03:21.230 bus/uacce: not in enabled drivers build config 00:03:21.230 bus/vmbus: not in enabled drivers build config 00:03:21.230 common/cnxk: not in enabled drivers build config 00:03:21.230 common/mlx5: not in enabled drivers build config 
00:03:21.230 common/nfp: not in enabled drivers build config 00:03:21.230 common/nitrox: not in enabled drivers build config 00:03:21.230 common/qat: not in enabled drivers build config 00:03:21.230 common/sfc_efx: not in enabled drivers build config 00:03:21.230 mempool/bucket: not in enabled drivers build config 00:03:21.230 mempool/cnxk: not in enabled drivers build config 00:03:21.230 mempool/dpaa: not in enabled drivers build config 00:03:21.231 mempool/dpaa2: not in enabled drivers build config 00:03:21.231 mempool/octeontx: not in enabled drivers build config 00:03:21.231 mempool/stack: not in enabled drivers build config 00:03:21.231 dma/cnxk: not in enabled drivers build config 00:03:21.231 dma/dpaa: not in enabled drivers build config 00:03:21.231 dma/dpaa2: not in enabled drivers build config 00:03:21.231 dma/hisilicon: not in enabled drivers build config 00:03:21.231 dma/idxd: not in enabled drivers build config 00:03:21.231 dma/ioat: not in enabled drivers build config 00:03:21.231 dma/skeleton: not in enabled drivers build config 00:03:21.231 net/af_packet: not in enabled drivers build config 00:03:21.231 net/af_xdp: not in enabled drivers build config 00:03:21.231 net/ark: not in enabled drivers build config 00:03:21.231 net/atlantic: not in enabled drivers build config 00:03:21.231 net/avp: not in enabled drivers build config 00:03:21.231 net/axgbe: not in enabled drivers build config 00:03:21.231 net/bnx2x: not in enabled drivers build config 00:03:21.231 net/bnxt: not in enabled drivers build config 00:03:21.231 net/bonding: not in enabled drivers build config 00:03:21.231 net/cnxk: not in enabled drivers build config 00:03:21.231 net/cpfl: not in enabled drivers build config 00:03:21.231 net/cxgbe: not in enabled drivers build config 00:03:21.231 net/dpaa: not in enabled drivers build config 00:03:21.231 net/dpaa2: not in enabled drivers build config 00:03:21.231 net/e1000: not in enabled drivers build config 00:03:21.231 net/ena: not in enabled drivers build config 00:03:21.231 net/enetc: not in enabled drivers build config 00:03:21.231 net/enetfec: not in enabled drivers build config 00:03:21.231 net/enic: not in enabled drivers build config 00:03:21.231 net/failsafe: not in enabled drivers build config 00:03:21.231 net/fm10k: not in enabled drivers build config 00:03:21.231 net/gve: not in enabled drivers build config 00:03:21.231 net/hinic: not in enabled drivers build config 00:03:21.231 net/hns3: not in enabled drivers build config 00:03:21.231 net/i40e: not in enabled drivers build config 00:03:21.231 net/iavf: not in enabled drivers build config 00:03:21.231 net/ice: not in enabled drivers build config 00:03:21.231 net/idpf: not in enabled drivers build config 00:03:21.231 net/igc: not in enabled drivers build config 00:03:21.231 net/ionic: not in enabled drivers build config 00:03:21.231 net/ipn3ke: not in enabled drivers build config 00:03:21.231 net/ixgbe: not in enabled drivers build config 00:03:21.231 net/mana: not in enabled drivers build config 00:03:21.231 net/memif: not in enabled drivers build config 00:03:21.231 net/mlx4: not in enabled drivers build config 00:03:21.231 net/mlx5: not in enabled drivers build config 00:03:21.231 net/mvneta: not in enabled drivers build config 00:03:21.231 net/mvpp2: not in enabled drivers build config 00:03:21.231 net/netvsc: not in enabled drivers build config 00:03:21.231 net/nfb: not in enabled drivers build config 00:03:21.231 net/nfp: not in enabled drivers build config 00:03:21.231 net/ngbe: not in enabled 
drivers build config 00:03:21.231 net/null: not in enabled drivers build config 00:03:21.231 net/octeontx: not in enabled drivers build config 00:03:21.231 net/octeon_ep: not in enabled drivers build config 00:03:21.231 net/pcap: not in enabled drivers build config 00:03:21.231 net/pfe: not in enabled drivers build config 00:03:21.231 net/qede: not in enabled drivers build config 00:03:21.231 net/ring: not in enabled drivers build config 00:03:21.231 net/sfc: not in enabled drivers build config 00:03:21.231 net/softnic: not in enabled drivers build config 00:03:21.231 net/tap: not in enabled drivers build config 00:03:21.231 net/thunderx: not in enabled drivers build config 00:03:21.231 net/txgbe: not in enabled drivers build config 00:03:21.231 net/vdev_netvsc: not in enabled drivers build config 00:03:21.231 net/vhost: not in enabled drivers build config 00:03:21.231 net/virtio: not in enabled drivers build config 00:03:21.231 net/vmxnet3: not in enabled drivers build config 00:03:21.231 raw/*: missing internal dependency, "rawdev" 00:03:21.231 crypto/armv8: not in enabled drivers build config 00:03:21.231 crypto/bcmfs: not in enabled drivers build config 00:03:21.231 crypto/caam_jr: not in enabled drivers build config 00:03:21.231 crypto/ccp: not in enabled drivers build config 00:03:21.231 crypto/cnxk: not in enabled drivers build config 00:03:21.231 crypto/dpaa_sec: not in enabled drivers build config 00:03:21.231 crypto/dpaa2_sec: not in enabled drivers build config 00:03:21.231 crypto/ipsec_mb: not in enabled drivers build config 00:03:21.231 crypto/mlx5: not in enabled drivers build config 00:03:21.231 crypto/mvsam: not in enabled drivers build config 00:03:21.231 crypto/nitrox: not in enabled drivers build config 00:03:21.231 crypto/null: not in enabled drivers build config 00:03:21.231 crypto/octeontx: not in enabled drivers build config 00:03:21.231 crypto/openssl: not in enabled drivers build config 00:03:21.231 crypto/scheduler: not in enabled drivers build config 00:03:21.231 crypto/uadk: not in enabled drivers build config 00:03:21.231 crypto/virtio: not in enabled drivers build config 00:03:21.231 compress/isal: not in enabled drivers build config 00:03:21.231 compress/mlx5: not in enabled drivers build config 00:03:21.231 compress/nitrox: not in enabled drivers build config 00:03:21.231 compress/octeontx: not in enabled drivers build config 00:03:21.231 compress/zlib: not in enabled drivers build config 00:03:21.231 regex/*: missing internal dependency, "regexdev" 00:03:21.231 ml/*: missing internal dependency, "mldev" 00:03:21.231 vdpa/ifc: not in enabled drivers build config 00:03:21.231 vdpa/mlx5: not in enabled drivers build config 00:03:21.231 vdpa/nfp: not in enabled drivers build config 00:03:21.231 vdpa/sfc: not in enabled drivers build config 00:03:21.231 event/*: missing internal dependency, "eventdev" 00:03:21.231 baseband/*: missing internal dependency, "bbdev" 00:03:21.231 gpu/*: missing internal dependency, "gpudev" 00:03:21.231 00:03:21.231 00:03:21.491 Build targets in project: 85 00:03:21.491 00:03:21.491 DPDK 24.03.0 00:03:21.491 00:03:21.491 User defined options 00:03:21.491 buildtype : debug 00:03:21.491 default_library : shared 00:03:21.491 libdir : lib 00:03:21.491 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:21.491 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:21.491 c_link_args : 00:03:21.491 cpu_instruction_set: native 00:03:21.491 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:21.491 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:21.491 enable_docs : false 00:03:21.491 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:21.491 enable_kmods : false 00:03:21.491 max_lcores : 128 00:03:21.491 tests : false 00:03:21.491 00:03:21.491 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:21.761 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:22.022 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:22.022 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:22.022 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:22.022 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:22.022 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:22.022 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:22.022 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:22.022 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:22.022 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:22.022 [10/268] Linking static target lib/librte_kvargs.a 00:03:22.022 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:22.022 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:22.022 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:22.023 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:22.023 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:22.023 [16/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:22.023 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:22.023 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:22.023 [19/268] Linking static target lib/librte_log.a 00:03:22.023 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:22.023 [21/268] Linking static target lib/librte_pci.a 00:03:22.281 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:22.281 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:22.281 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:22.281 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:22.281 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:22.281 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:22.281 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:22.542 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:22.542 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:22.542 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:22.542 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:22.542 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:22.542 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:22.542 [35/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:22.542 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:22.542 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:22.542 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:22.542 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:22.542 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:22.542 [41/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:22.542 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:22.542 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:22.542 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:22.542 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:22.542 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:22.542 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:22.542 [48/268] Linking static target lib/librte_meter.a 00:03:22.542 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:22.542 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:22.542 [51/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:22.542 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:22.542 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:22.542 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:22.542 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:22.542 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:22.542 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:22.542 [58/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:22.542 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:22.542 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:22.542 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:22.542 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:22.542 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:22.542 [64/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:22.542 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:22.542 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:22.542 [67/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:22.542 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:22.542 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:22.542 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:22.542 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:22.542 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:22.542 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:22.542 [74/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:22.542 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:22.542 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:22.542 [77/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:22.542 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:22.542 [79/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:22.542 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:22.542 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:22.542 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:22.542 [83/268] Linking static target lib/librte_telemetry.a 00:03:22.542 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:22.542 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:22.542 [86/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.542 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:22.542 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:22.542 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:22.542 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:22.542 [91/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:22.542 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:22.542 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:22.542 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:22.542 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:22.542 [96/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.542 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:22.542 [98/268] Linking static target lib/librte_ring.a 00:03:22.542 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:22.542 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:22.542 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:22.542 [102/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:22.542 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:22.542 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:22.542 [105/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:22.543 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:22.543 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:22.543 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:22.543 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:22.543 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:22.543 [111/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:22.543 [112/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:22.543 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:22.543 [114/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:22.543 [115/268] Linking static target lib/librte_mempool.a 00:03:22.801 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:22.801 [117/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:22.801 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:22.801 [119/268] Linking static target lib/librte_rcu.a 00:03:22.801 [120/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:22.801 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:22.801 [122/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:22.802 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:22.802 [124/268] Linking static target lib/librte_eal.a 00:03:22.802 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:22.802 [126/268] Linking static target lib/librte_net.a 00:03:22.802 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:22.802 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:22.802 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:22.802 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:22.802 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:22.802 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:22.802 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:22.802 [134/268] Linking static target lib/librte_cmdline.a 00:03:22.802 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.802 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:22.802 [137/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:22.802 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:22.802 [139/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:22.802 [140/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.802 [141/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:22.802 [142/268] Linking static target lib/librte_timer.a 00:03:22.802 [143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:22.802 [144/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:22.802 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:22.802 [146/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.802 [147/268] Linking target lib/librte_log.so.24.1 00:03:23.060 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:23.060 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:23.060 [150/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:23.060 [151/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:23.060 [152/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:23.060 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:23.060 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:23.060 [155/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:23.060 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:23.060 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.060 [158/268] Linking static target lib/librte_dmadev.a 00:03:23.060 [159/268] Linking static target lib/librte_mbuf.a 00:03:23.060 [160/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.060 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:23.060 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:23.060 [163/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.061 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:23.061 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:23.061 [166/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:23.061 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:23.061 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:23.061 [169/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:23.061 [170/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:23.061 [171/268] Linking static target lib/librte_security.a 00:03:23.061 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:23.061 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:23.061 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:23.061 [175/268] Linking target lib/librte_telemetry.so.24.1 00:03:23.061 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:23.061 [177/268] Linking target lib/librte_kvargs.so.24.1 00:03:23.061 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:23.061 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:23.061 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:23.061 [181/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:23.061 [182/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:23.061 [183/268] Linking static target lib/librte_compressdev.a 00:03:23.061 [184/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:23.061 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:23.061 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:23.061 [187/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:23.061 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:23.061 [189/268] Linking static target lib/librte_power.a 00:03:23.061 [190/268] Linking static target drivers/librte_bus_vdev.a 00:03:23.319 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:23.319 [192/268] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:23.319 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:23.319 [194/268] Linking static target lib/librte_reorder.a 00:03:23.319 [195/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:23.319 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:23.319 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:23.319 [198/268] Linking static target lib/librte_hash.a 00:03:23.319 [199/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:23.319 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:23.320 [201/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:23.320 [202/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.320 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:23.320 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:23.320 [205/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.320 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:23.320 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:23.320 [208/268] Linking static target drivers/librte_bus_pci.a 00:03:23.577 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:23.577 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:23.577 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:23.577 [212/268] Linking static target drivers/librte_mempool_ring.a 00:03:23.577 [213/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.577 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:23.577 [215/268] Linking static target lib/librte_cryptodev.a 00:03:23.577 [216/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.577 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.577 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.836 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.836 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.836 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.836 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:23.836 [223/268] Linking static target lib/librte_ethdev.a 00:03:24.095 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:24.095 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.095 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.095 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.030 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 
00:03:25.030 [229/268] Linking static target lib/librte_vhost.a 00:03:25.598 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.973 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.237 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.804 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.804 [234/268] Linking target lib/librte_eal.so.24.1 00:03:33.062 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:33.062 [236/268] Linking target lib/librte_ring.so.24.1 00:03:33.062 [237/268] Linking target lib/librte_meter.so.24.1 00:03:33.062 [238/268] Linking target lib/librte_timer.so.24.1 00:03:33.062 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:33.062 [240/268] Linking target lib/librte_pci.so.24.1 00:03:33.062 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:33.062 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:33.062 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:33.062 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:33.062 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:33.062 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:33.062 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:33.062 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:33.062 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:33.321 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:33.321 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:33.321 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:33.321 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:33.580 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:33.580 [255/268] Linking target lib/librte_net.so.24.1 00:03:33.580 [256/268] Linking target lib/librte_compressdev.so.24.1 00:03:33.580 [257/268] Linking target lib/librte_reorder.so.24.1 00:03:33.580 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:33.580 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:33.580 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:33.580 [261/268] Linking target lib/librte_hash.so.24.1 00:03:33.580 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:33.838 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:33.838 [264/268] Linking target lib/librte_security.so.24.1 00:03:33.838 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:33.838 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:33.838 [267/268] Linking target lib/librte_power.so.24.1 00:03:33.838 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:33.838 INFO: autodetecting backend as ninja 00:03:33.838 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:46.105 CC lib/log/log.o 00:03:46.105 CC lib/ut_mock/mock.o 00:03:46.105 CC lib/log/log_flags.o 
00:03:46.105 CC lib/log/log_deprecated.o 00:03:46.105 CC lib/ut/ut.o 00:03:46.105 LIB libspdk_ut.a 00:03:46.105 LIB libspdk_ut_mock.a 00:03:46.105 LIB libspdk_log.a 00:03:46.105 SO libspdk_ut.so.2.0 00:03:46.105 SO libspdk_ut_mock.so.6.0 00:03:46.105 SO libspdk_log.so.7.1 00:03:46.105 SYMLINK libspdk_ut_mock.so 00:03:46.105 SYMLINK libspdk_ut.so 00:03:46.105 SYMLINK libspdk_log.so 00:03:46.105 CC lib/util/base64.o 00:03:46.105 CC lib/util/bit_array.o 00:03:46.105 CC lib/util/cpuset.o 00:03:46.105 CXX lib/trace_parser/trace.o 00:03:46.105 CC lib/ioat/ioat.o 00:03:46.105 CC lib/util/crc16.o 00:03:46.105 CC lib/dma/dma.o 00:03:46.105 CC lib/util/crc32.o 00:03:46.105 CC lib/util/crc32c.o 00:03:46.105 CC lib/util/crc32_ieee.o 00:03:46.105 CC lib/util/crc64.o 00:03:46.105 CC lib/util/dif.o 00:03:46.105 CC lib/util/fd.o 00:03:46.105 CC lib/util/fd_group.o 00:03:46.105 CC lib/util/file.o 00:03:46.105 CC lib/util/hexlify.o 00:03:46.105 CC lib/util/iov.o 00:03:46.105 CC lib/util/math.o 00:03:46.105 CC lib/util/net.o 00:03:46.105 CC lib/util/pipe.o 00:03:46.105 CC lib/util/strerror_tls.o 00:03:46.105 CC lib/util/string.o 00:03:46.105 CC lib/util/uuid.o 00:03:46.105 CC lib/util/xor.o 00:03:46.105 CC lib/util/zipf.o 00:03:46.105 CC lib/util/md5.o 00:03:46.105 CC lib/vfio_user/host/vfio_user_pci.o 00:03:46.105 CC lib/vfio_user/host/vfio_user.o 00:03:46.105 LIB libspdk_dma.a 00:03:46.105 SO libspdk_dma.so.5.0 00:03:46.105 LIB libspdk_ioat.a 00:03:46.105 SYMLINK libspdk_dma.so 00:03:46.105 SO libspdk_ioat.so.7.0 00:03:46.105 SYMLINK libspdk_ioat.so 00:03:46.105 LIB libspdk_vfio_user.a 00:03:46.105 SO libspdk_vfio_user.so.5.0 00:03:46.105 LIB libspdk_util.a 00:03:46.106 SYMLINK libspdk_vfio_user.so 00:03:46.106 SO libspdk_util.so.10.1 00:03:46.106 SYMLINK libspdk_util.so 00:03:46.106 LIB libspdk_trace_parser.a 00:03:46.106 SO libspdk_trace_parser.so.6.0 00:03:46.106 SYMLINK libspdk_trace_parser.so 00:03:46.106 CC lib/env_dpdk/env.o 00:03:46.106 CC lib/env_dpdk/memory.o 00:03:46.106 CC lib/env_dpdk/pci.o 00:03:46.106 CC lib/env_dpdk/init.o 00:03:46.106 CC lib/env_dpdk/threads.o 00:03:46.106 CC lib/env_dpdk/pci_ioat.o 00:03:46.106 CC lib/env_dpdk/pci_virtio.o 00:03:46.106 CC lib/rdma_provider/common.o 00:03:46.106 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:46.106 CC lib/env_dpdk/pci_vmd.o 00:03:46.106 CC lib/conf/conf.o 00:03:46.106 CC lib/env_dpdk/pci_idxd.o 00:03:46.106 CC lib/env_dpdk/pci_event.o 00:03:46.106 CC lib/env_dpdk/sigbus_handler.o 00:03:46.106 CC lib/env_dpdk/pci_dpdk.o 00:03:46.106 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:46.106 CC lib/vmd/vmd.o 00:03:46.106 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:46.106 CC lib/vmd/led.o 00:03:46.106 CC lib/json/json_parse.o 00:03:46.106 CC lib/json/json_util.o 00:03:46.106 CC lib/rdma_utils/rdma_utils.o 00:03:46.106 CC lib/json/json_write.o 00:03:46.106 CC lib/idxd/idxd.o 00:03:46.106 CC lib/idxd/idxd_user.o 00:03:46.106 CC lib/idxd/idxd_kernel.o 00:03:46.106 LIB libspdk_rdma_provider.a 00:03:46.106 SO libspdk_rdma_provider.so.6.0 00:03:46.106 LIB libspdk_conf.a 00:03:46.106 SO libspdk_conf.so.6.0 00:03:46.106 SYMLINK libspdk_rdma_provider.so 00:03:46.106 LIB libspdk_rdma_utils.a 00:03:46.106 LIB libspdk_json.a 00:03:46.106 SO libspdk_rdma_utils.so.1.0 00:03:46.106 SYMLINK libspdk_conf.so 00:03:46.106 SO libspdk_json.so.6.0 00:03:46.365 SYMLINK libspdk_rdma_utils.so 00:03:46.365 SYMLINK libspdk_json.so 00:03:46.365 LIB libspdk_idxd.a 00:03:46.365 LIB libspdk_vmd.a 00:03:46.365 SO libspdk_idxd.so.12.1 00:03:46.365 SO libspdk_vmd.so.6.0 00:03:46.624 SYMLINK 
libspdk_idxd.so 00:03:46.624 SYMLINK libspdk_vmd.so 00:03:46.624 CC lib/jsonrpc/jsonrpc_server.o 00:03:46.624 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:46.624 CC lib/jsonrpc/jsonrpc_client.o 00:03:46.624 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:46.883 LIB libspdk_jsonrpc.a 00:03:46.883 SO libspdk_jsonrpc.so.6.0 00:03:46.883 SYMLINK libspdk_jsonrpc.so 00:03:46.883 LIB libspdk_env_dpdk.a 00:03:46.883 SO libspdk_env_dpdk.so.15.1 00:03:47.142 SYMLINK libspdk_env_dpdk.so 00:03:47.142 CC lib/rpc/rpc.o 00:03:47.402 LIB libspdk_rpc.a 00:03:47.402 SO libspdk_rpc.so.6.0 00:03:47.402 SYMLINK libspdk_rpc.so 00:03:47.661 CC lib/trace/trace.o 00:03:47.661 CC lib/trace/trace_flags.o 00:03:47.661 CC lib/trace/trace_rpc.o 00:03:47.661 CC lib/keyring/keyring.o 00:03:47.661 CC lib/keyring/keyring_rpc.o 00:03:47.661 CC lib/notify/notify.o 00:03:47.661 CC lib/notify/notify_rpc.o 00:03:47.920 LIB libspdk_notify.a 00:03:47.920 LIB libspdk_keyring.a 00:03:47.920 SO libspdk_notify.so.6.0 00:03:47.920 LIB libspdk_trace.a 00:03:47.920 SO libspdk_keyring.so.2.0 00:03:47.920 SO libspdk_trace.so.11.0 00:03:47.920 SYMLINK libspdk_notify.so 00:03:48.179 SYMLINK libspdk_keyring.so 00:03:48.179 SYMLINK libspdk_trace.so 00:03:48.438 CC lib/thread/thread.o 00:03:48.438 CC lib/thread/iobuf.o 00:03:48.438 CC lib/sock/sock.o 00:03:48.438 CC lib/sock/sock_rpc.o 00:03:48.697 LIB libspdk_sock.a 00:03:48.697 SO libspdk_sock.so.10.0 00:03:48.697 SYMLINK libspdk_sock.so 00:03:49.264 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:49.264 CC lib/nvme/nvme_ctrlr.o 00:03:49.264 CC lib/nvme/nvme_fabric.o 00:03:49.264 CC lib/nvme/nvme_ns_cmd.o 00:03:49.264 CC lib/nvme/nvme_ns.o 00:03:49.264 CC lib/nvme/nvme_pcie_common.o 00:03:49.264 CC lib/nvme/nvme_pcie.o 00:03:49.264 CC lib/nvme/nvme_qpair.o 00:03:49.264 CC lib/nvme/nvme.o 00:03:49.264 CC lib/nvme/nvme_quirks.o 00:03:49.264 CC lib/nvme/nvme_transport.o 00:03:49.264 CC lib/nvme/nvme_discovery.o 00:03:49.264 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.264 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.264 CC lib/nvme/nvme_tcp.o 00:03:49.264 CC lib/nvme/nvme_opal.o 00:03:49.264 CC lib/nvme/nvme_io_msg.o 00:03:49.264 CC lib/nvme/nvme_poll_group.o 00:03:49.264 CC lib/nvme/nvme_zns.o 00:03:49.264 CC lib/nvme/nvme_stubs.o 00:03:49.264 CC lib/nvme/nvme_auth.o 00:03:49.264 CC lib/nvme/nvme_cuse.o 00:03:49.264 CC lib/nvme/nvme_vfio_user.o 00:03:49.264 CC lib/nvme/nvme_rdma.o 00:03:49.523 LIB libspdk_thread.a 00:03:49.523 SO libspdk_thread.so.10.2 00:03:49.523 SYMLINK libspdk_thread.so 00:03:49.780 CC lib/vfu_tgt/tgt_rpc.o 00:03:49.780 CC lib/vfu_tgt/tgt_endpoint.o 00:03:49.780 CC lib/fsdev/fsdev.o 00:03:49.780 CC lib/fsdev/fsdev_io.o 00:03:49.780 CC lib/virtio/virtio.o 00:03:49.780 CC lib/fsdev/fsdev_rpc.o 00:03:49.780 CC lib/virtio/virtio_vhost_user.o 00:03:49.780 CC lib/blob/blobstore.o 00:03:49.780 CC lib/virtio/virtio_vfio_user.o 00:03:49.780 CC lib/blob/request.o 00:03:49.780 CC lib/virtio/virtio_pci.o 00:03:49.780 CC lib/blob/zeroes.o 00:03:49.780 CC lib/blob/blob_bs_dev.o 00:03:50.039 CC lib/accel/accel.o 00:03:50.039 CC lib/accel/accel_rpc.o 00:03:50.039 CC lib/accel/accel_sw.o 00:03:50.039 CC lib/init/json_config.o 00:03:50.039 CC lib/init/subsystem.o 00:03:50.039 CC lib/init/subsystem_rpc.o 00:03:50.039 CC lib/init/rpc.o 00:03:50.039 LIB libspdk_init.a 00:03:50.298 LIB libspdk_vfu_tgt.a 00:03:50.298 SO libspdk_init.so.6.0 00:03:50.298 LIB libspdk_virtio.a 00:03:50.298 SO libspdk_vfu_tgt.so.3.0 00:03:50.298 SO libspdk_virtio.so.7.0 00:03:50.298 SYMLINK libspdk_init.so 00:03:50.298 SYMLINK 
libspdk_vfu_tgt.so 00:03:50.298 SYMLINK libspdk_virtio.so 00:03:50.298 LIB libspdk_fsdev.a 00:03:50.558 SO libspdk_fsdev.so.1.0 00:03:50.558 SYMLINK libspdk_fsdev.so 00:03:50.558 CC lib/event/app.o 00:03:50.558 CC lib/event/reactor.o 00:03:50.558 CC lib/event/log_rpc.o 00:03:50.558 CC lib/event/app_rpc.o 00:03:50.558 CC lib/event/scheduler_static.o 00:03:50.817 LIB libspdk_accel.a 00:03:50.817 SO libspdk_accel.so.16.0 00:03:50.817 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:50.817 SYMLINK libspdk_accel.so 00:03:50.817 LIB libspdk_nvme.a 00:03:50.817 LIB libspdk_event.a 00:03:50.817 SO libspdk_nvme.so.15.0 00:03:51.077 SO libspdk_event.so.14.0 00:03:51.077 SYMLINK libspdk_event.so 00:03:51.077 CC lib/bdev/bdev.o 00:03:51.077 CC lib/bdev/bdev_rpc.o 00:03:51.077 CC lib/bdev/bdev_zone.o 00:03:51.077 CC lib/bdev/part.o 00:03:51.077 SYMLINK libspdk_nvme.so 00:03:51.077 CC lib/bdev/scsi_nvme.o 00:03:51.337 LIB libspdk_fuse_dispatcher.a 00:03:51.337 SO libspdk_fuse_dispatcher.so.1.0 00:03:51.337 SYMLINK libspdk_fuse_dispatcher.so 00:03:52.275 LIB libspdk_blob.a 00:03:52.275 SO libspdk_blob.so.11.0 00:03:52.275 SYMLINK libspdk_blob.so 00:03:52.534 CC lib/blobfs/blobfs.o 00:03:52.534 CC lib/blobfs/tree.o 00:03:52.534 CC lib/lvol/lvol.o 00:03:52.793 LIB libspdk_bdev.a 00:03:53.052 SO libspdk_bdev.so.17.0 00:03:53.052 SYMLINK libspdk_bdev.so 00:03:53.052 LIB libspdk_blobfs.a 00:03:53.052 SO libspdk_blobfs.so.10.0 00:03:53.052 LIB libspdk_lvol.a 00:03:53.310 SYMLINK libspdk_blobfs.so 00:03:53.310 SO libspdk_lvol.so.10.0 00:03:53.310 SYMLINK libspdk_lvol.so 00:03:53.310 CC lib/ftl/ftl_core.o 00:03:53.310 CC lib/ftl/ftl_init.o 00:03:53.310 CC lib/ftl/ftl_layout.o 00:03:53.310 CC lib/ftl/ftl_debug.o 00:03:53.310 CC lib/ftl/ftl_io.o 00:03:53.310 CC lib/scsi/dev.o 00:03:53.310 CC lib/ublk/ublk.o 00:03:53.310 CC lib/ftl/ftl_sb.o 00:03:53.310 CC lib/scsi/lun.o 00:03:53.310 CC lib/ftl/ftl_l2p.o 00:03:53.310 CC lib/ublk/ublk_rpc.o 00:03:53.310 CC lib/scsi/port.o 00:03:53.310 CC lib/nvmf/ctrlr.o 00:03:53.310 CC lib/ftl/ftl_l2p_flat.o 00:03:53.310 CC lib/nbd/nbd.o 00:03:53.310 CC lib/scsi/scsi.o 00:03:53.310 CC lib/ftl/ftl_nv_cache.o 00:03:53.310 CC lib/nvmf/ctrlr_discovery.o 00:03:53.310 CC lib/scsi/scsi_bdev.o 00:03:53.310 CC lib/nbd/nbd_rpc.o 00:03:53.310 CC lib/ftl/ftl_band.o 00:03:53.310 CC lib/nvmf/ctrlr_bdev.o 00:03:53.310 CC lib/scsi/scsi_pr.o 00:03:53.310 CC lib/nvmf/subsystem.o 00:03:53.310 CC lib/ftl/ftl_band_ops.o 00:03:53.310 CC lib/scsi/scsi_rpc.o 00:03:53.310 CC lib/nvmf/nvmf.o 00:03:53.310 CC lib/nvmf/nvmf_rpc.o 00:03:53.310 CC lib/ftl/ftl_writer.o 00:03:53.310 CC lib/nvmf/transport.o 00:03:53.310 CC lib/scsi/task.o 00:03:53.310 CC lib/ftl/ftl_rq.o 00:03:53.310 CC lib/ftl/ftl_reloc.o 00:03:53.310 CC lib/nvmf/stubs.o 00:03:53.310 CC lib/nvmf/tcp.o 00:03:53.310 CC lib/nvmf/mdns_server.o 00:03:53.310 CC lib/ftl/ftl_l2p_cache.o 00:03:53.310 CC lib/nvmf/vfio_user.o 00:03:53.310 CC lib/ftl/ftl_p2l.o 00:03:53.310 CC lib/ftl/ftl_p2l_log.o 00:03:53.310 CC lib/nvmf/rdma.o 00:03:53.310 CC lib/nvmf/auth.o 00:03:53.310 CC lib/ftl/mngt/ftl_mngt.o 00:03:53.310 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:53.310 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:53.310 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:53.310 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:53.310 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:53.310 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:53.310 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:53.310 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:53.310 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:53.311 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:53.311 
CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:53.311 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:53.311 CC lib/ftl/utils/ftl_conf.o 00:03:53.311 CC lib/ftl/utils/ftl_md.o 00:03:53.311 CC lib/ftl/utils/ftl_mempool.o 00:03:53.311 CC lib/ftl/utils/ftl_bitmap.o 00:03:53.311 CC lib/ftl/utils/ftl_property.o 00:03:53.311 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:53.311 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:53.311 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:53.311 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:53.311 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:53.311 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:53.311 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:53.311 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:53.311 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:53.311 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:53.311 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:53.311 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.311 CC lib/ftl/ftl_trace.o 00:03:53.311 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:53.311 CC lib/ftl/base/ftl_base_dev.o 00:03:53.311 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.877 LIB libspdk_nbd.a 00:03:53.877 SO libspdk_nbd.so.7.0 00:03:53.877 LIB libspdk_scsi.a 00:03:53.877 SYMLINK libspdk_nbd.so 00:03:53.877 SO libspdk_scsi.so.9.0 00:03:54.135 SYMLINK libspdk_scsi.so 00:03:54.135 LIB libspdk_ublk.a 00:03:54.135 SO libspdk_ublk.so.3.0 00:03:54.135 SYMLINK libspdk_ublk.so 00:03:54.395 CC lib/iscsi/conn.o 00:03:54.395 CC lib/vhost/vhost.o 00:03:54.395 CC lib/iscsi/init_grp.o 00:03:54.395 CC lib/iscsi/iscsi.o 00:03:54.395 CC lib/vhost/vhost_rpc.o 00:03:54.395 CC lib/iscsi/param.o 00:03:54.395 CC lib/vhost/vhost_scsi.o 00:03:54.395 CC lib/iscsi/portal_grp.o 00:03:54.395 CC lib/vhost/vhost_blk.o 00:03:54.395 CC lib/iscsi/tgt_node.o 00:03:54.395 CC lib/vhost/rte_vhost_user.o 00:03:54.395 CC lib/iscsi/iscsi_subsystem.o 00:03:54.395 CC lib/iscsi/iscsi_rpc.o 00:03:54.395 CC lib/iscsi/task.o 00:03:54.395 LIB libspdk_ftl.a 00:03:54.653 SO libspdk_ftl.so.9.0 00:03:54.653 SYMLINK libspdk_ftl.so 00:03:55.222 LIB libspdk_nvmf.a 00:03:55.222 LIB libspdk_vhost.a 00:03:55.222 SO libspdk_vhost.so.8.0 00:03:55.222 SO libspdk_nvmf.so.19.0 00:03:55.222 SYMLINK libspdk_vhost.so 00:03:55.222 LIB libspdk_iscsi.a 00:03:55.482 SO libspdk_iscsi.so.8.0 00:03:55.482 SYMLINK libspdk_nvmf.so 00:03:55.482 SYMLINK libspdk_iscsi.so 00:03:56.051 CC module/env_dpdk/env_dpdk_rpc.o 00:03:56.051 CC module/vfu_device/vfu_virtio.o 00:03:56.051 CC module/vfu_device/vfu_virtio_rpc.o 00:03:56.051 CC module/vfu_device/vfu_virtio_scsi.o 00:03:56.051 CC module/vfu_device/vfu_virtio_blk.o 00:03:56.051 CC module/vfu_device/vfu_virtio_fs.o 00:03:56.051 CC module/scheduler/gscheduler/gscheduler.o 00:03:56.051 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:56.051 CC module/accel/dsa/accel_dsa.o 00:03:56.051 CC module/accel/dsa/accel_dsa_rpc.o 00:03:56.051 CC module/accel/error/accel_error.o 00:03:56.051 CC module/accel/error/accel_error_rpc.o 00:03:56.051 CC module/accel/ioat/accel_ioat.o 00:03:56.051 CC module/accel/ioat/accel_ioat_rpc.o 00:03:56.051 CC module/keyring/file/keyring.o 00:03:56.051 CC module/sock/posix/posix.o 00:03:56.051 CC module/keyring/file/keyring_rpc.o 00:03:56.051 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:56.051 LIB libspdk_env_dpdk_rpc.a 00:03:56.051 CC module/fsdev/aio/linux_aio_mgr.o 00:03:56.051 CC module/fsdev/aio/fsdev_aio.o 00:03:56.309 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:56.309 CC module/accel/iaa/accel_iaa.o 00:03:56.309 CC module/blob/bdev/blob_bdev.o 00:03:56.309 CC module/keyring/linux/keyring_rpc.o 
00:03:56.309 CC module/accel/iaa/accel_iaa_rpc.o 00:03:56.309 CC module/keyring/linux/keyring.o 00:03:56.309 SO libspdk_env_dpdk_rpc.so.6.0 00:03:56.309 SYMLINK libspdk_env_dpdk_rpc.so 00:03:56.309 LIB libspdk_scheduler_gscheduler.a 00:03:56.309 LIB libspdk_keyring_file.a 00:03:56.309 SO libspdk_scheduler_gscheduler.so.4.0 00:03:56.309 LIB libspdk_keyring_linux.a 00:03:56.309 SO libspdk_keyring_file.so.2.0 00:03:56.309 LIB libspdk_scheduler_dpdk_governor.a 00:03:56.309 LIB libspdk_accel_error.a 00:03:56.309 LIB libspdk_accel_ioat.a 00:03:56.309 SO libspdk_keyring_linux.so.1.0 00:03:56.309 LIB libspdk_scheduler_dynamic.a 00:03:56.309 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:56.309 SO libspdk_accel_error.so.2.0 00:03:56.309 SYMLINK libspdk_scheduler_gscheduler.so 00:03:56.309 SO libspdk_scheduler_dynamic.so.4.0 00:03:56.309 SO libspdk_accel_ioat.so.6.0 00:03:56.309 LIB libspdk_accel_iaa.a 00:03:56.309 SYMLINK libspdk_keyring_file.so 00:03:56.309 SYMLINK libspdk_keyring_linux.so 00:03:56.568 SYMLINK libspdk_accel_error.so 00:03:56.568 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:56.568 SO libspdk_accel_iaa.so.3.0 00:03:56.568 LIB libspdk_accel_dsa.a 00:03:56.568 LIB libspdk_blob_bdev.a 00:03:56.568 SYMLINK libspdk_scheduler_dynamic.so 00:03:56.568 SYMLINK libspdk_accel_ioat.so 00:03:56.568 SO libspdk_blob_bdev.so.11.0 00:03:56.568 SO libspdk_accel_dsa.so.5.0 00:03:56.568 SYMLINK libspdk_accel_iaa.so 00:03:56.568 SYMLINK libspdk_accel_dsa.so 00:03:56.568 SYMLINK libspdk_blob_bdev.so 00:03:56.568 LIB libspdk_vfu_device.a 00:03:56.568 SO libspdk_vfu_device.so.3.0 00:03:56.568 SYMLINK libspdk_vfu_device.so 00:03:56.827 LIB libspdk_fsdev_aio.a 00:03:56.827 LIB libspdk_sock_posix.a 00:03:56.827 SO libspdk_fsdev_aio.so.1.0 00:03:56.827 SO libspdk_sock_posix.so.6.0 00:03:56.827 SYMLINK libspdk_fsdev_aio.so 00:03:56.827 SYMLINK libspdk_sock_posix.so 00:03:57.087 CC module/bdev/gpt/gpt.o 00:03:57.087 CC module/bdev/gpt/vbdev_gpt.o 00:03:57.087 CC module/blobfs/bdev/blobfs_bdev.o 00:03:57.087 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:57.087 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:57.087 CC module/bdev/malloc/bdev_malloc.o 00:03:57.087 CC module/bdev/error/vbdev_error.o 00:03:57.087 CC module/bdev/error/vbdev_error_rpc.o 00:03:57.087 CC module/bdev/split/vbdev_split_rpc.o 00:03:57.087 CC module/bdev/split/vbdev_split.o 00:03:57.087 CC module/bdev/lvol/vbdev_lvol.o 00:03:57.087 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:57.087 CC module/bdev/delay/vbdev_delay.o 00:03:57.087 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:57.087 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:57.087 CC module/bdev/raid/bdev_raid.o 00:03:57.087 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:57.087 CC module/bdev/null/bdev_null.o 00:03:57.087 CC module/bdev/raid/bdev_raid_rpc.o 00:03:57.087 CC module/bdev/raid/bdev_raid_sb.o 00:03:57.087 CC module/bdev/passthru/vbdev_passthru.o 00:03:57.087 CC module/bdev/null/bdev_null_rpc.o 00:03:57.087 CC module/bdev/raid/raid0.o 00:03:57.087 CC module/bdev/iscsi/bdev_iscsi.o 00:03:57.087 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:57.087 CC module/bdev/raid/raid1.o 00:03:57.087 CC module/bdev/ftl/bdev_ftl.o 00:03:57.087 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:57.087 CC module/bdev/raid/concat.o 00:03:57.087 CC module/bdev/aio/bdev_aio_rpc.o 00:03:57.087 CC module/bdev/aio/bdev_aio.o 00:03:57.087 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:57.087 CC module/bdev/nvme/bdev_nvme.o 00:03:57.087 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:57.087 CC 
module/bdev/nvme/bdev_mdns_client.o 00:03:57.087 CC module/bdev/nvme/nvme_rpc.o 00:03:57.087 CC module/bdev/nvme/vbdev_opal.o 00:03:57.087 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:57.087 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:57.087 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:57.087 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:57.087 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:57.345 LIB libspdk_blobfs_bdev.a 00:03:57.345 LIB libspdk_bdev_split.a 00:03:57.345 SO libspdk_blobfs_bdev.so.6.0 00:03:57.345 LIB libspdk_bdev_error.a 00:03:57.345 SO libspdk_bdev_split.so.6.0 00:03:57.345 SO libspdk_bdev_error.so.6.0 00:03:57.345 LIB libspdk_bdev_gpt.a 00:03:57.345 LIB libspdk_bdev_null.a 00:03:57.345 LIB libspdk_bdev_passthru.a 00:03:57.345 SYMLINK libspdk_blobfs_bdev.so 00:03:57.345 SO libspdk_bdev_null.so.6.0 00:03:57.345 SYMLINK libspdk_bdev_split.so 00:03:57.345 SO libspdk_bdev_gpt.so.6.0 00:03:57.345 SO libspdk_bdev_passthru.so.6.0 00:03:57.345 SYMLINK libspdk_bdev_error.so 00:03:57.345 LIB libspdk_bdev_zone_block.a 00:03:57.345 LIB libspdk_bdev_malloc.a 00:03:57.345 LIB libspdk_bdev_ftl.a 00:03:57.345 LIB libspdk_bdev_aio.a 00:03:57.345 LIB libspdk_bdev_delay.a 00:03:57.345 SO libspdk_bdev_zone_block.so.6.0 00:03:57.345 SO libspdk_bdev_ftl.so.6.0 00:03:57.345 SO libspdk_bdev_malloc.so.6.0 00:03:57.345 LIB libspdk_bdev_iscsi.a 00:03:57.345 SO libspdk_bdev_aio.so.6.0 00:03:57.345 SO libspdk_bdev_delay.so.6.0 00:03:57.345 SYMLINK libspdk_bdev_null.so 00:03:57.345 SYMLINK libspdk_bdev_gpt.so 00:03:57.345 SYMLINK libspdk_bdev_passthru.so 00:03:57.345 SO libspdk_bdev_iscsi.so.6.0 00:03:57.604 SYMLINK libspdk_bdev_ftl.so 00:03:57.604 SYMLINK libspdk_bdev_zone_block.so 00:03:57.604 SYMLINK libspdk_bdev_malloc.so 00:03:57.604 SYMLINK libspdk_bdev_delay.so 00:03:57.604 SYMLINK libspdk_bdev_aio.so 00:03:57.604 SYMLINK libspdk_bdev_iscsi.so 00:03:57.604 LIB libspdk_bdev_lvol.a 00:03:57.604 LIB libspdk_bdev_virtio.a 00:03:57.604 SO libspdk_bdev_lvol.so.6.0 00:03:57.604 SO libspdk_bdev_virtio.so.6.0 00:03:57.604 SYMLINK libspdk_bdev_lvol.so 00:03:57.604 SYMLINK libspdk_bdev_virtio.so 00:03:57.862 LIB libspdk_bdev_raid.a 00:03:57.862 SO libspdk_bdev_raid.so.6.0 00:03:57.862 SYMLINK libspdk_bdev_raid.so 00:03:58.798 LIB libspdk_bdev_nvme.a 00:03:58.798 SO libspdk_bdev_nvme.so.7.0 00:03:58.798 SYMLINK libspdk_bdev_nvme.so 00:03:59.398 CC module/event/subsystems/vmd/vmd.o 00:03:59.398 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:59.398 CC module/event/subsystems/iobuf/iobuf.o 00:03:59.398 CC module/event/subsystems/sock/sock.o 00:03:59.398 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:59.398 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:59.398 CC module/event/subsystems/keyring/keyring.o 00:03:59.398 CC module/event/subsystems/scheduler/scheduler.o 00:03:59.398 CC module/event/subsystems/fsdev/fsdev.o 00:03:59.398 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:59.675 LIB libspdk_event_vfu_tgt.a 00:03:59.675 LIB libspdk_event_vhost_blk.a 00:03:59.675 LIB libspdk_event_vmd.a 00:03:59.675 LIB libspdk_event_keyring.a 00:03:59.676 LIB libspdk_event_fsdev.a 00:03:59.676 LIB libspdk_event_scheduler.a 00:03:59.676 LIB libspdk_event_sock.a 00:03:59.676 SO libspdk_event_vfu_tgt.so.3.0 00:03:59.676 LIB libspdk_event_iobuf.a 00:03:59.676 SO libspdk_event_keyring.so.1.0 00:03:59.676 SO libspdk_event_vhost_blk.so.3.0 00:03:59.676 SO libspdk_event_vmd.so.6.0 00:03:59.676 SO libspdk_event_fsdev.so.1.0 00:03:59.676 SO libspdk_event_scheduler.so.4.0 00:03:59.676 SO libspdk_event_sock.so.5.0 
00:03:59.676 SO libspdk_event_iobuf.so.3.0 00:03:59.676 SYMLINK libspdk_event_vfu_tgt.so 00:03:59.676 SYMLINK libspdk_event_keyring.so 00:03:59.676 SYMLINK libspdk_event_vhost_blk.so 00:03:59.676 SYMLINK libspdk_event_fsdev.so 00:03:59.676 SYMLINK libspdk_event_vmd.so 00:03:59.676 SYMLINK libspdk_event_scheduler.so 00:03:59.676 SYMLINK libspdk_event_iobuf.so 00:03:59.676 SYMLINK libspdk_event_sock.so 00:03:59.966 CC module/event/subsystems/accel/accel.o 00:04:00.226 LIB libspdk_event_accel.a 00:04:00.226 SO libspdk_event_accel.so.6.0 00:04:00.226 SYMLINK libspdk_event_accel.so 00:04:00.484 CC module/event/subsystems/bdev/bdev.o 00:04:00.743 LIB libspdk_event_bdev.a 00:04:00.743 SO libspdk_event_bdev.so.6.0 00:04:00.743 SYMLINK libspdk_event_bdev.so 00:04:01.311 CC module/event/subsystems/nbd/nbd.o 00:04:01.311 CC module/event/subsystems/scsi/scsi.o 00:04:01.311 CC module/event/subsystems/ublk/ublk.o 00:04:01.311 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:01.311 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.311 LIB libspdk_event_nbd.a 00:04:01.311 LIB libspdk_event_ublk.a 00:04:01.311 LIB libspdk_event_scsi.a 00:04:01.311 SO libspdk_event_nbd.so.6.0 00:04:01.311 SO libspdk_event_ublk.so.3.0 00:04:01.311 SO libspdk_event_scsi.so.6.0 00:04:01.311 LIB libspdk_event_nvmf.a 00:04:01.311 SYMLINK libspdk_event_nbd.so 00:04:01.311 SO libspdk_event_nvmf.so.6.0 00:04:01.311 SYMLINK libspdk_event_ublk.so 00:04:01.311 SYMLINK libspdk_event_scsi.so 00:04:01.571 SYMLINK libspdk_event_nvmf.so 00:04:01.830 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:01.830 CC module/event/subsystems/iscsi/iscsi.o 00:04:01.830 LIB libspdk_event_vhost_scsi.a 00:04:01.830 LIB libspdk_event_iscsi.a 00:04:01.830 SO libspdk_event_vhost_scsi.so.3.0 00:04:01.830 SO libspdk_event_iscsi.so.6.0 00:04:01.830 SYMLINK libspdk_event_vhost_scsi.so 00:04:02.089 SYMLINK libspdk_event_iscsi.so 00:04:02.089 SO libspdk.so.6.0 00:04:02.089 SYMLINK libspdk.so 00:04:02.664 CXX app/trace/trace.o 00:04:02.664 CC test/rpc_client/rpc_client_test.o 00:04:02.664 CC app/spdk_nvme_identify/identify.o 00:04:02.664 CC app/trace_record/trace_record.o 00:04:02.664 CC app/spdk_top/spdk_top.o 00:04:02.664 CC app/spdk_nvme_discover/discovery_aer.o 00:04:02.664 CC app/spdk_lspci/spdk_lspci.o 00:04:02.664 TEST_HEADER include/spdk/accel_module.h 00:04:02.664 TEST_HEADER include/spdk/assert.h 00:04:02.664 TEST_HEADER include/spdk/accel.h 00:04:02.664 CC app/spdk_nvme_perf/perf.o 00:04:02.664 TEST_HEADER include/spdk/base64.h 00:04:02.664 TEST_HEADER include/spdk/barrier.h 00:04:02.664 TEST_HEADER include/spdk/bdev.h 00:04:02.664 TEST_HEADER include/spdk/bdev_module.h 00:04:02.664 TEST_HEADER include/spdk/bit_array.h 00:04:02.664 TEST_HEADER include/spdk/bit_pool.h 00:04:02.664 TEST_HEADER include/spdk/bdev_zone.h 00:04:02.664 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:02.664 TEST_HEADER include/spdk/blob_bdev.h 00:04:02.664 TEST_HEADER include/spdk/conf.h 00:04:02.664 TEST_HEADER include/spdk/blobfs.h 00:04:02.664 TEST_HEADER include/spdk/blob.h 00:04:02.664 TEST_HEADER include/spdk/config.h 00:04:02.664 TEST_HEADER include/spdk/cpuset.h 00:04:02.664 TEST_HEADER include/spdk/crc16.h 00:04:02.664 TEST_HEADER include/spdk/crc64.h 00:04:02.664 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:02.664 TEST_HEADER include/spdk/crc32.h 00:04:02.664 TEST_HEADER include/spdk/dif.h 00:04:02.664 TEST_HEADER include/spdk/endian.h 00:04:02.664 TEST_HEADER include/spdk/dma.h 00:04:02.664 TEST_HEADER include/spdk/event.h 00:04:02.664 TEST_HEADER 
include/spdk/env_dpdk.h 00:04:02.664 TEST_HEADER include/spdk/env.h 00:04:02.664 TEST_HEADER include/spdk/fd_group.h 00:04:02.664 TEST_HEADER include/spdk/fd.h 00:04:02.664 TEST_HEADER include/spdk/file.h 00:04:02.664 TEST_HEADER include/spdk/fsdev.h 00:04:02.664 TEST_HEADER include/spdk/fsdev_module.h 00:04:02.664 TEST_HEADER include/spdk/ftl.h 00:04:02.664 TEST_HEADER include/spdk/gpt_spec.h 00:04:02.664 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:02.664 TEST_HEADER include/spdk/hexlify.h 00:04:02.664 TEST_HEADER include/spdk/histogram_data.h 00:04:02.664 TEST_HEADER include/spdk/idxd.h 00:04:02.664 TEST_HEADER include/spdk/init.h 00:04:02.664 TEST_HEADER include/spdk/ioat.h 00:04:02.664 TEST_HEADER include/spdk/idxd_spec.h 00:04:02.664 CC app/iscsi_tgt/iscsi_tgt.o 00:04:02.664 TEST_HEADER include/spdk/ioat_spec.h 00:04:02.664 TEST_HEADER include/spdk/iscsi_spec.h 00:04:02.664 TEST_HEADER include/spdk/json.h 00:04:02.664 TEST_HEADER include/spdk/jsonrpc.h 00:04:02.664 TEST_HEADER include/spdk/keyring_module.h 00:04:02.664 TEST_HEADER include/spdk/keyring.h 00:04:02.664 TEST_HEADER include/spdk/log.h 00:04:02.664 TEST_HEADER include/spdk/likely.h 00:04:02.664 TEST_HEADER include/spdk/md5.h 00:04:02.664 TEST_HEADER include/spdk/lvol.h 00:04:02.664 TEST_HEADER include/spdk/mmio.h 00:04:02.664 TEST_HEADER include/spdk/memory.h 00:04:02.664 TEST_HEADER include/spdk/nbd.h 00:04:02.664 TEST_HEADER include/spdk/notify.h 00:04:02.664 TEST_HEADER include/spdk/nvme.h 00:04:02.664 TEST_HEADER include/spdk/net.h 00:04:02.664 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:02.664 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:02.664 TEST_HEADER include/spdk/nvme_intel.h 00:04:02.664 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:02.664 TEST_HEADER include/spdk/nvme_spec.h 00:04:02.664 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:02.664 TEST_HEADER include/spdk/nvme_zns.h 00:04:02.664 TEST_HEADER include/spdk/nvmf.h 00:04:02.664 TEST_HEADER include/spdk/nvmf_spec.h 00:04:02.664 TEST_HEADER include/spdk/opal.h 00:04:02.664 TEST_HEADER include/spdk/opal_spec.h 00:04:02.664 TEST_HEADER include/spdk/pci_ids.h 00:04:02.664 TEST_HEADER include/spdk/nvmf_transport.h 00:04:02.664 TEST_HEADER include/spdk/pipe.h 00:04:02.664 TEST_HEADER include/spdk/queue.h 00:04:02.664 TEST_HEADER include/spdk/reduce.h 00:04:02.664 TEST_HEADER include/spdk/scheduler.h 00:04:02.664 TEST_HEADER include/spdk/rpc.h 00:04:02.664 TEST_HEADER include/spdk/scsi.h 00:04:02.664 TEST_HEADER include/spdk/scsi_spec.h 00:04:02.664 TEST_HEADER include/spdk/sock.h 00:04:02.664 TEST_HEADER include/spdk/stdinc.h 00:04:02.664 TEST_HEADER include/spdk/thread.h 00:04:02.664 TEST_HEADER include/spdk/trace.h 00:04:02.664 TEST_HEADER include/spdk/string.h 00:04:02.664 TEST_HEADER include/spdk/trace_parser.h 00:04:02.664 TEST_HEADER include/spdk/tree.h 00:04:02.664 TEST_HEADER include/spdk/ublk.h 00:04:02.664 TEST_HEADER include/spdk/util.h 00:04:02.664 TEST_HEADER include/spdk/uuid.h 00:04:02.664 CC app/spdk_dd/spdk_dd.o 00:04:02.664 TEST_HEADER include/spdk/version.h 00:04:02.664 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:02.664 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:02.664 TEST_HEADER include/spdk/vhost.h 00:04:02.664 TEST_HEADER include/spdk/vmd.h 00:04:02.664 TEST_HEADER include/spdk/xor.h 00:04:02.664 TEST_HEADER include/spdk/zipf.h 00:04:02.664 CXX test/cpp_headers/accel.o 00:04:02.664 CC app/spdk_tgt/spdk_tgt.o 00:04:02.664 CXX test/cpp_headers/accel_module.o 00:04:02.664 CXX test/cpp_headers/assert.o 00:04:02.664 CXX 
test/cpp_headers/base64.o 00:04:02.664 CXX test/cpp_headers/barrier.o 00:04:02.664 CXX test/cpp_headers/bdev.o 00:04:02.664 CXX test/cpp_headers/bdev_zone.o 00:04:02.664 CXX test/cpp_headers/bdev_module.o 00:04:02.664 CXX test/cpp_headers/bit_array.o 00:04:02.664 CXX test/cpp_headers/blob_bdev.o 00:04:02.664 CXX test/cpp_headers/bit_pool.o 00:04:02.664 CXX test/cpp_headers/blobfs_bdev.o 00:04:02.664 CXX test/cpp_headers/config.o 00:04:02.664 CXX test/cpp_headers/blobfs.o 00:04:02.664 CXX test/cpp_headers/blob.o 00:04:02.664 CC app/nvmf_tgt/nvmf_main.o 00:04:02.664 CXX test/cpp_headers/conf.o 00:04:02.664 CXX test/cpp_headers/crc16.o 00:04:02.664 CXX test/cpp_headers/crc32.o 00:04:02.664 CXX test/cpp_headers/cpuset.o 00:04:02.664 CXX test/cpp_headers/crc64.o 00:04:02.664 CXX test/cpp_headers/dif.o 00:04:02.664 CXX test/cpp_headers/env_dpdk.o 00:04:02.665 CXX test/cpp_headers/endian.o 00:04:02.665 CXX test/cpp_headers/dma.o 00:04:02.665 CXX test/cpp_headers/event.o 00:04:02.665 CXX test/cpp_headers/env.o 00:04:02.665 CXX test/cpp_headers/fd.o 00:04:02.665 CXX test/cpp_headers/fd_group.o 00:04:02.665 CXX test/cpp_headers/file.o 00:04:02.665 CXX test/cpp_headers/fsdev.o 00:04:02.665 CXX test/cpp_headers/fsdev_module.o 00:04:02.665 CXX test/cpp_headers/ftl.o 00:04:02.665 CXX test/cpp_headers/hexlify.o 00:04:02.665 CXX test/cpp_headers/fuse_dispatcher.o 00:04:02.665 CXX test/cpp_headers/gpt_spec.o 00:04:02.665 CXX test/cpp_headers/histogram_data.o 00:04:02.665 CXX test/cpp_headers/idxd.o 00:04:02.665 CXX test/cpp_headers/init.o 00:04:02.665 CXX test/cpp_headers/idxd_spec.o 00:04:02.665 CXX test/cpp_headers/ioat_spec.o 00:04:02.665 CXX test/cpp_headers/ioat.o 00:04:02.665 CXX test/cpp_headers/json.o 00:04:02.665 CXX test/cpp_headers/keyring.o 00:04:02.665 CXX test/cpp_headers/iscsi_spec.o 00:04:02.665 CC test/app/jsoncat/jsoncat.o 00:04:02.665 CXX test/cpp_headers/jsonrpc.o 00:04:02.665 CXX test/cpp_headers/likely.o 00:04:02.665 CXX test/cpp_headers/log.o 00:04:02.665 CXX test/cpp_headers/keyring_module.o 00:04:02.665 CXX test/cpp_headers/md5.o 00:04:02.665 CXX test/cpp_headers/lvol.o 00:04:02.665 CXX test/cpp_headers/memory.o 00:04:02.665 CXX test/cpp_headers/mmio.o 00:04:02.665 CXX test/cpp_headers/nbd.o 00:04:02.665 CXX test/cpp_headers/notify.o 00:04:02.665 CXX test/cpp_headers/net.o 00:04:02.665 CXX test/cpp_headers/nvme.o 00:04:02.665 CXX test/cpp_headers/nvme_intel.o 00:04:02.665 CXX test/cpp_headers/nvme_ocssd.o 00:04:02.665 CXX test/cpp_headers/nvme_spec.o 00:04:02.665 CXX test/cpp_headers/nvme_zns.o 00:04:02.665 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:02.665 CXX test/cpp_headers/nvmf_cmd.o 00:04:02.665 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:02.665 CXX test/cpp_headers/nvmf.o 00:04:02.665 CC examples/ioat/perf/perf.o 00:04:02.665 CXX test/cpp_headers/nvmf_spec.o 00:04:02.665 CC test/app/stub/stub.o 00:04:02.665 CC test/app/histogram_perf/histogram_perf.o 00:04:02.665 CC examples/util/zipf/zipf.o 00:04:02.665 CC test/env/memory/memory_ut.o 00:04:02.665 CC examples/ioat/verify/verify.o 00:04:02.665 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:02.665 CC test/app/bdev_svc/bdev_svc.o 00:04:02.665 CC test/env/vtophys/vtophys.o 00:04:02.665 CC test/thread/poller_perf/poller_perf.o 00:04:02.665 CC test/dma/test_dma/test_dma.o 00:04:02.665 CC app/fio/nvme/fio_plugin.o 00:04:02.665 CC test/env/pci/pci_ut.o 00:04:02.665 CXX test/cpp_headers/nvmf_transport.o 00:04:02.665 CC app/fio/bdev/fio_plugin.o 00:04:02.934 LINK spdk_lspci 00:04:02.934 LINK spdk_nvme_discover 00:04:02.934 
LINK interrupt_tgt 00:04:02.934 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:02.934 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:03.196 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.196 LINK rpc_client_test 00:04:03.196 CC test/env/mem_callbacks/mem_callbacks.o 00:04:03.196 LINK iscsi_tgt 00:04:03.196 LINK zipf 00:04:03.196 LINK jsoncat 00:04:03.196 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:03.196 CXX test/cpp_headers/opal.o 00:04:03.196 CXX test/cpp_headers/opal_spec.o 00:04:03.196 CXX test/cpp_headers/pci_ids.o 00:04:03.196 CXX test/cpp_headers/pipe.o 00:04:03.196 CXX test/cpp_headers/queue.o 00:04:03.196 CXX test/cpp_headers/reduce.o 00:04:03.196 CXX test/cpp_headers/rpc.o 00:04:03.196 CXX test/cpp_headers/scheduler.o 00:04:03.196 CXX test/cpp_headers/scsi_spec.o 00:04:03.196 CXX test/cpp_headers/scsi.o 00:04:03.196 CXX test/cpp_headers/sock.o 00:04:03.196 CXX test/cpp_headers/stdinc.o 00:04:03.196 LINK spdk_trace_record 00:04:03.196 LINK nvmf_tgt 00:04:03.196 CXX test/cpp_headers/string.o 00:04:03.196 CXX test/cpp_headers/thread.o 00:04:03.196 CXX test/cpp_headers/trace.o 00:04:03.196 CXX test/cpp_headers/trace_parser.o 00:04:03.196 CXX test/cpp_headers/tree.o 00:04:03.196 CXX test/cpp_headers/ublk.o 00:04:03.196 LINK poller_perf 00:04:03.196 LINK histogram_perf 00:04:03.196 CXX test/cpp_headers/uuid.o 00:04:03.196 CXX test/cpp_headers/util.o 00:04:03.196 CXX test/cpp_headers/version.o 00:04:03.196 CXX test/cpp_headers/vfio_user_pci.o 00:04:03.196 LINK vtophys 00:04:03.196 CXX test/cpp_headers/vfio_user_spec.o 00:04:03.196 CXX test/cpp_headers/vhost.o 00:04:03.196 CXX test/cpp_headers/xor.o 00:04:03.196 CXX test/cpp_headers/vmd.o 00:04:03.196 CXX test/cpp_headers/zipf.o 00:04:03.455 LINK stub 00:04:03.455 LINK spdk_tgt 00:04:03.455 LINK env_dpdk_post_init 00:04:03.455 LINK bdev_svc 00:04:03.455 LINK ioat_perf 00:04:03.455 LINK verify 00:04:03.455 LINK spdk_dd 00:04:03.455 LINK pci_ut 00:04:03.455 LINK spdk_trace 00:04:03.713 LINK test_dma 00:04:03.713 CC test/event/reactor_perf/reactor_perf.o 00:04:03.713 LINK nvme_fuzz 00:04:03.713 CC test/event/reactor/reactor.o 00:04:03.713 LINK spdk_bdev 00:04:03.713 CC examples/vmd/led/led.o 00:04:03.713 CC examples/sock/hello_world/hello_sock.o 00:04:03.713 CC examples/vmd/lsvmd/lsvmd.o 00:04:03.713 CC examples/idxd/perf/perf.o 00:04:03.713 CC test/event/event_perf/event_perf.o 00:04:03.713 LINK spdk_nvme_identify 00:04:03.713 CC test/event/app_repeat/app_repeat.o 00:04:03.713 CC examples/thread/thread/thread_ex.o 00:04:03.713 CC test/event/scheduler/scheduler.o 00:04:03.713 LINK spdk_nvme_perf 00:04:03.713 LINK vhost_fuzz 00:04:03.713 LINK spdk_nvme 00:04:03.713 LINK reactor_perf 00:04:03.713 LINK reactor 00:04:03.713 LINK lsvmd 00:04:03.713 LINK led 00:04:03.973 LINK event_perf 00:04:03.973 LINK spdk_top 00:04:03.973 LINK mem_callbacks 00:04:03.973 LINK app_repeat 00:04:03.973 CC app/vhost/vhost.o 00:04:03.973 LINK hello_sock 00:04:03.973 LINK scheduler 00:04:03.973 LINK idxd_perf 00:04:03.973 LINK thread 00:04:04.231 CC test/nvme/reset/reset.o 00:04:04.231 CC test/nvme/sgl/sgl.o 00:04:04.231 CC test/nvme/connect_stress/connect_stress.o 00:04:04.231 CC test/nvme/cuse/cuse.o 00:04:04.231 CC test/nvme/reserve/reserve.o 00:04:04.231 CC test/nvme/err_injection/err_injection.o 00:04:04.231 CC test/nvme/aer/aer.o 00:04:04.231 CC test/nvme/startup/startup.o 00:04:04.231 CC test/nvme/fdp/fdp.o 00:04:04.231 CC test/nvme/boot_partition/boot_partition.o 00:04:04.231 CC test/nvme/compliance/nvme_compliance.o 00:04:04.231 CC 
test/nvme/simple_copy/simple_copy.o 00:04:04.231 CC test/nvme/e2edp/nvme_dp.o 00:04:04.231 CC test/nvme/overhead/overhead.o 00:04:04.231 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:04.231 LINK vhost 00:04:04.231 CC test/nvme/fused_ordering/fused_ordering.o 00:04:04.231 CC test/blobfs/mkfs/mkfs.o 00:04:04.231 CC test/accel/dif/dif.o 00:04:04.231 LINK memory_ut 00:04:04.231 CC test/lvol/esnap/esnap.o 00:04:04.231 LINK startup 00:04:04.231 CC examples/nvme/hello_world/hello_world.o 00:04:04.231 LINK boot_partition 00:04:04.231 CC examples/nvme/abort/abort.o 00:04:04.231 LINK err_injection 00:04:04.231 LINK doorbell_aers 00:04:04.490 CC examples/nvme/reconnect/reconnect.o 00:04:04.490 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:04.490 CC examples/nvme/arbitration/arbitration.o 00:04:04.490 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:04.490 LINK connect_stress 00:04:04.490 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:04.490 LINK reserve 00:04:04.490 CC examples/nvme/hotplug/hotplug.o 00:04:04.490 LINK fused_ordering 00:04:04.490 LINK mkfs 00:04:04.490 LINK reset 00:04:04.490 LINK simple_copy 00:04:04.490 LINK sgl 00:04:04.490 LINK nvme_dp 00:04:04.490 LINK overhead 00:04:04.490 LINK aer 00:04:04.490 LINK nvme_compliance 00:04:04.490 LINK fdp 00:04:04.490 CC examples/accel/perf/accel_perf.o 00:04:04.490 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:04.490 CC examples/blob/hello_world/hello_blob.o 00:04:04.490 CC examples/blob/cli/blobcli.o 00:04:04.490 LINK pmr_persistence 00:04:04.490 LINK hello_world 00:04:04.490 LINK cmb_copy 00:04:04.749 LINK hotplug 00:04:04.749 LINK reconnect 00:04:04.749 LINK arbitration 00:04:04.749 LINK abort 00:04:04.749 LINK iscsi_fuzz 00:04:04.749 LINK dif 00:04:04.749 LINK hello_blob 00:04:04.749 LINK hello_fsdev 00:04:04.749 LINK nvme_manage 00:04:05.007 LINK accel_perf 00:04:05.007 LINK blobcli 00:04:05.266 LINK cuse 00:04:05.266 CC test/bdev/bdevio/bdevio.o 00:04:05.266 CC examples/bdev/hello_world/hello_bdev.o 00:04:05.266 CC examples/bdev/bdevperf/bdevperf.o 00:04:05.524 LINK bdevio 00:04:05.524 LINK hello_bdev 00:04:06.091 LINK bdevperf 00:04:06.658 CC examples/nvmf/nvmf/nvmf.o 00:04:06.658 LINK nvmf 00:04:08.038 LINK esnap 00:04:08.038 00:04:08.038 real 0m55.298s 00:04:08.038 user 8m15.711s 00:04:08.038 sys 3m37.197s 00:04:08.038 16:29:12 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:08.038 16:29:12 make -- common/autotest_common.sh@10 -- $ set +x 00:04:08.038 ************************************ 00:04:08.038 END TEST make 00:04:08.038 ************************************ 00:04:08.038 16:29:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:08.038 16:29:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:08.038 16:29:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:08.038 16:29:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.038 16:29:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:08.038 16:29:12 -- pm/common@44 -- $ pid=270905 00:04:08.038 16:29:12 -- pm/common@50 -- $ kill -TERM 270905 00:04:08.038 16:29:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.038 16:29:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:08.038 16:29:12 -- pm/common@44 -- $ pid=270907 00:04:08.038 16:29:12 -- pm/common@50 -- $ kill -TERM 270907 00:04:08.038 16:29:12 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:08.038 16:29:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:08.038 16:29:12 -- pm/common@44 -- $ pid=270909 00:04:08.038 16:29:12 -- pm/common@50 -- $ kill -TERM 270909 00:04:08.038 16:29:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.038 16:29:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:08.039 16:29:12 -- pm/common@44 -- $ pid=270932 00:04:08.039 16:29:12 -- pm/common@50 -- $ sudo -E kill -TERM 270932 00:04:08.299 16:29:12 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:08.299 16:29:12 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:08.299 16:29:12 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:08.299 16:29:12 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:08.299 16:29:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.299 16:29:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.299 16:29:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.299 16:29:12 -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.299 16:29:12 -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.299 16:29:12 -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.299 16:29:12 -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.299 16:29:12 -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.299 16:29:12 -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.299 16:29:12 -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.299 16:29:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.299 16:29:12 -- scripts/common.sh@344 -- # case "$op" in 00:04:08.299 16:29:12 -- scripts/common.sh@345 -- # : 1 00:04:08.299 16:29:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.299 16:29:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.299 16:29:12 -- scripts/common.sh@365 -- # decimal 1 00:04:08.299 16:29:12 -- scripts/common.sh@353 -- # local d=1 00:04:08.299 16:29:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.299 16:29:12 -- scripts/common.sh@355 -- # echo 1 00:04:08.299 16:29:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.299 16:29:12 -- scripts/common.sh@366 -- # decimal 2 00:04:08.299 16:29:12 -- scripts/common.sh@353 -- # local d=2 00:04:08.299 16:29:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.299 16:29:12 -- scripts/common.sh@355 -- # echo 2 00:04:08.299 16:29:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.299 16:29:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.299 16:29:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.299 16:29:12 -- scripts/common.sh@368 -- # return 0 00:04:08.299 16:29:12 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.299 16:29:12 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.299 --rc genhtml_branch_coverage=1 00:04:08.299 --rc genhtml_function_coverage=1 00:04:08.299 --rc genhtml_legend=1 00:04:08.299 --rc geninfo_all_blocks=1 00:04:08.299 --rc geninfo_unexecuted_blocks=1 00:04:08.299 00:04:08.299 ' 00:04:08.299 16:29:12 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.299 --rc genhtml_branch_coverage=1 00:04:08.299 --rc genhtml_function_coverage=1 00:04:08.299 --rc genhtml_legend=1 00:04:08.299 --rc geninfo_all_blocks=1 00:04:08.299 --rc geninfo_unexecuted_blocks=1 00:04:08.299 00:04:08.299 ' 00:04:08.299 16:29:12 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.299 --rc genhtml_branch_coverage=1 00:04:08.299 --rc genhtml_function_coverage=1 00:04:08.299 --rc genhtml_legend=1 00:04:08.299 --rc geninfo_all_blocks=1 00:04:08.299 --rc geninfo_unexecuted_blocks=1 00:04:08.299 00:04:08.299 ' 00:04:08.299 16:29:12 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.299 --rc genhtml_branch_coverage=1 00:04:08.299 --rc genhtml_function_coverage=1 00:04:08.299 --rc genhtml_legend=1 00:04:08.299 --rc geninfo_all_blocks=1 00:04:08.299 --rc geninfo_unexecuted_blocks=1 00:04:08.299 00:04:08.299 ' 00:04:08.299 16:29:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:08.299 16:29:12 -- nvmf/common.sh@7 -- # uname -s 00:04:08.299 16:29:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.299 16:29:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.299 16:29:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.299 16:29:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.299 16:29:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.299 16:29:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.299 16:29:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.299 16:29:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.299 16:29:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.299 16:29:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.299 16:29:12 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:08.299 16:29:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:08.299 16:29:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.299 16:29:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.299 16:29:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:08.299 16:29:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:08.299 16:29:12 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:08.299 16:29:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:08.299 16:29:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:08.299 16:29:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.299 16:29:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.299 16:29:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.299 16:29:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.299 16:29:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.299 16:29:12 -- paths/export.sh@5 -- # export PATH 00:04:08.299 16:29:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.299 16:29:12 -- nvmf/common.sh@51 -- # : 0 00:04:08.299 16:29:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:08.300 16:29:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:08.300 16:29:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:08.300 16:29:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:08.300 16:29:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:08.300 16:29:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:08.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:08.300 16:29:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:08.300 16:29:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:08.300 16:29:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:08.300 16:29:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:08.300 16:29:12 -- spdk/autotest.sh@32 -- # uname -s 00:04:08.300 16:29:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:08.300 16:29:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:08.300 16:29:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
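For context on the surrounding trace lines: autotest.sh saves the existing kernel core_pattern and repoints it at SPDK's core-collector script, so any crash during the run is written into the job's output directory. A minimal sketch of that mechanism, with illustrative paths and variable names rather than the exact autotest.sh code:

    # Illustrative sketch: pipe kernel core dumps through a collector script for the test run.
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)        # remember the previous handler
    mkdir -p "$output_dir/coredumps"                             # destination for collected dumps
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # ... tests run here ...
    echo "$old_core_pattern" > /proc/sys/kernel/core_pattern     # restore the handler on cleanup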
00:04:08.300 16:29:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:08.300 16:29:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:08.300 16:29:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:08.300 16:29:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:08.300 16:29:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:08.300 16:29:12 -- spdk/autotest.sh@48 -- # udevadm_pid=333173 00:04:08.300 16:29:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:08.300 16:29:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:08.300 16:29:12 -- pm/common@17 -- # local monitor 00:04:08.300 16:29:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.300 16:29:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.300 16:29:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.300 16:29:12 -- pm/common@21 -- # date +%s 00:04:08.300 16:29:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.300 16:29:12 -- pm/common@21 -- # date +%s 00:04:08.300 16:29:12 -- pm/common@25 -- # sleep 1 00:04:08.300 16:29:12 -- pm/common@21 -- # date +%s 00:04:08.300 16:29:12 -- pm/common@21 -- # date +%s 00:04:08.300 16:29:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728916152 00:04:08.300 16:29:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728916152 00:04:08.300 16:29:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728916152 00:04:08.300 16:29:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728916152 00:04:08.300 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728916152_collect-cpu-load.pm.log 00:04:08.300 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728916152_collect-vmstat.pm.log 00:04:08.300 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728916152_collect-cpu-temp.pm.log 00:04:08.560 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728916152_collect-bmc-pm.bmc.pm.log 00:04:09.499 16:29:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:09.499 16:29:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:09.499 16:29:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.499 16:29:13 -- common/autotest_common.sh@10 -- # set +x 00:04:09.499 16:29:13 -- spdk/autotest.sh@59 -- # create_test_list 00:04:09.499 16:29:13 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:09.499 16:29:13 -- common/autotest_common.sh@10 -- # set +x 00:04:09.499 16:29:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:09.499 16:29:13 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:09.499 16:29:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:09.499 16:29:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:09.499 16:29:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:09.499 16:29:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:09.499 16:29:13 -- common/autotest_common.sh@1455 -- # uname 00:04:09.499 16:29:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:09.499 16:29:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:09.499 16:29:13 -- common/autotest_common.sh@1475 -- # uname 00:04:09.499 16:29:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:09.499 16:29:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:09.499 16:29:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:09.499 lcov: LCOV version 1.15 00:04:09.499 16:29:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:21.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:21.710 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:33.925 16:29:38 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:33.925 16:29:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:33.925 16:29:38 -- common/autotest_common.sh@10 -- # set +x 00:04:33.925 16:29:38 -- spdk/autotest.sh@78 -- # rm -f 00:04:33.925 16:29:38 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.463 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:36.463 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:36.463 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:36.723 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:36.982 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:36.982 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:36.982 16:29:41 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:36.982 16:29:41 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:36.982 16:29:41 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:36.982 16:29:41 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:36.982 16:29:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.982 16:29:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:36.982 16:29:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:36.982 16:29:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:36.982 16:29:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.982 16:29:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:36.982 16:29:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.982 16:29:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.982 16:29:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:36.982 16:29:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:36.982 16:29:41 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:36.982 No valid GPT data, bailing 00:04:36.982 16:29:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:36.982 16:29:41 -- scripts/common.sh@394 -- # pt= 00:04:36.982 16:29:41 -- scripts/common.sh@395 -- # return 1 00:04:36.982 16:29:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:36.982 1+0 records in 00:04:36.982 1+0 records out 00:04:36.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041156 s, 255 MB/s 00:04:36.982 16:29:41 -- spdk/autotest.sh@105 -- # sync 00:04:36.982 16:29:41 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:36.982 16:29:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:36.982 16:29:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:43.553 16:29:46 -- spdk/autotest.sh@111 -- # uname -s 00:04:43.553 16:29:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:43.553 16:29:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:43.553 16:29:46 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:45.458 Hugepages 00:04:45.458 node hugesize free / total 00:04:45.458 node0 1048576kB 0 / 0 00:04:45.458 node0 2048kB 0 / 0 00:04:45.458 node1 1048576kB 0 / 0 00:04:45.458 node1 2048kB 0 / 0 00:04:45.458 00:04:45.458 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.458 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:45.458 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:45.458 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:45.458 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:45.458 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:45.458 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:45.458 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:45.458 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:45.458 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:45.458 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:45.458 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:45.458 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:45.458 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:45.458 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:45.458 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:45.458 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:45.458 I/OAT 0000:80:04.7 8086 2021 
1 ioatdma - - 00:04:45.458 16:29:49 -- spdk/autotest.sh@117 -- # uname -s 00:04:45.458 16:29:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:45.458 16:29:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:45.458 16:29:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.748 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:48.748 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:49.687 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.946 16:29:54 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:50.884 16:29:55 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:50.884 16:29:55 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:50.884 16:29:55 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:50.884 16:29:55 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:50.884 16:29:55 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:50.884 16:29:55 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:50.884 16:29:55 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.884 16:29:55 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:50.884 16:29:55 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:50.884 16:29:55 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:50.884 16:29:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:50.884 16:29:55 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.176 Waiting for block devices as requested 00:04:54.176 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:54.176 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:54.176 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:54.176 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:54.176 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:54.176 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:54.176 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:54.176 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:54.435 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:54.435 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:54.435 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:54.694 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:54.694 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:54.694 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:54.952 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:54.952 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:54.952 0000:80:04.0 (8086 2021): vfio-pci 
-> ioatdma 00:04:55.212 16:29:59 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:55.212 16:29:59 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:55.212 16:29:59 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:55.212 16:29:59 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:04:55.212 16:29:59 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:55.212 16:29:59 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:55.212 16:29:59 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:55.212 16:29:59 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:55.212 16:29:59 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:55.212 16:29:59 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:55.212 16:29:59 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:55.212 16:29:59 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:55.212 16:29:59 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:55.212 16:29:59 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:55.212 16:29:59 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:55.212 16:29:59 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:55.212 16:29:59 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:55.212 16:29:59 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:55.212 16:29:59 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:55.212 16:29:59 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:55.212 16:29:59 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:55.212 16:29:59 -- common/autotest_common.sh@1541 -- # continue 00:04:55.212 16:29:59 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:55.212 16:29:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.212 16:29:59 -- common/autotest_common.sh@10 -- # set +x 00:04:55.212 16:29:59 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:55.212 16:29:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.212 16:29:59 -- common/autotest_common.sh@10 -- # set +x 00:04:55.212 16:29:59 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.525 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:58.525 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.463 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.463 16:30:04 -- spdk/autotest.sh@127 -- # timing_exit afterboot 
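The capability probe traced above boils down to reading OACS from Identify Controller and testing the Namespace Management bit (bit 3, value 0x8), then skipping the controller when no unallocated capacity remains. A condensed sketch of that per-device check, with the device list and node name purely illustrative:

    # Sketch of the per-controller capability check (inside the loop over detected NVMe devices):
    for ctrlr in /dev/nvme0; do                                   # illustrative; the real list comes from the BDF scan
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # e.g. " 0xe"
        (( oacs & 0x8 )) || continue                              # bit 3: Namespace Management supported?
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && continue                            # no unallocated capacity, nothing to revert
        # ...namespace handling would follow here for controllers with free capacity...
    done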
00:04:59.463 16:30:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.463 16:30:04 -- common/autotest_common.sh@10 -- # set +x 00:04:59.463 16:30:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:59.463 16:30:04 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:59.463 16:30:04 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:59.463 16:30:04 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:59.463 16:30:04 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:59.463 16:30:04 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:59.463 16:30:04 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:59.723 16:30:04 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:59.723 16:30:04 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:59.723 16:30:04 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:59.723 16:30:04 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.723 16:30:04 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:59.723 16:30:04 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:59.723 16:30:04 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:59.723 16:30:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:59.723 16:30:04 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:59.723 16:30:04 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:59.723 16:30:04 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:59.723 16:30:04 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:59.723 16:30:04 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:59.723 16:30:04 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:59.723 16:30:04 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:59.723 16:30:04 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:59.723 16:30:04 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=347525 00:04:59.723 16:30:04 -- common/autotest_common.sh@1583 -- # waitforlisten 347525 00:04:59.723 16:30:04 -- common/autotest_common.sh@831 -- # '[' -z 347525 ']' 00:04:59.723 16:30:04 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.723 16:30:04 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.723 16:30:04 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.723 16:30:04 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.723 16:30:04 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.723 16:30:04 -- common/autotest_common.sh@10 -- # set +x 00:04:59.723 [2024-10-14 16:30:04.239887] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
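The waitforlisten step above amounts to launching spdk_tgt, remembering its pid, and polling the JSON-RPC UNIX socket until it answers; only then are the per-test RPCs (bdev_nvme_attach_controller, bdev_nvme_opal_revert) issued. A simplified sketch of that readiness loop, with an illustrative poll interval:

    # Sketch: start spdk_tgt and wait until /var/tmp/spdk.sock accepts JSON-RPC calls.
    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                      # illustrative retry interval
    done
    # Socket is live; subsequent rpc.py calls (attach controllers, opal revert, ...) can proceed.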
00:04:59.723 [2024-10-14 16:30:04.239935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid347525 ] 00:04:59.723 [2024-10-14 16:30:04.308279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.723 [2024-10-14 16:30:04.348025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.982 16:30:04 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.982 16:30:04 -- common/autotest_common.sh@864 -- # return 0 00:04:59.982 16:30:04 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:59.982 16:30:04 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:59.982 16:30:04 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:03.273 nvme0n1 00:05:03.273 16:30:07 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:03.273 [2024-10-14 16:30:07.755362] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:03.273 request: 00:05:03.273 { 00:05:03.273 "nvme_ctrlr_name": "nvme0", 00:05:03.273 "password": "test", 00:05:03.273 "method": "bdev_nvme_opal_revert", 00:05:03.273 "req_id": 1 00:05:03.273 } 00:05:03.273 Got JSON-RPC error response 00:05:03.273 response: 00:05:03.273 { 00:05:03.273 "code": -32602, 00:05:03.273 "message": "Invalid parameters" 00:05:03.273 } 00:05:03.273 16:30:07 -- common/autotest_common.sh@1589 -- # true 00:05:03.273 16:30:07 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:03.273 16:30:07 -- common/autotest_common.sh@1593 -- # killprocess 347525 00:05:03.273 16:30:07 -- common/autotest_common.sh@950 -- # '[' -z 347525 ']' 00:05:03.273 16:30:07 -- common/autotest_common.sh@954 -- # kill -0 347525 00:05:03.273 16:30:07 -- common/autotest_common.sh@955 -- # uname 00:05:03.273 16:30:07 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:03.273 16:30:07 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 347525 00:05:03.273 16:30:07 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:03.273 16:30:07 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:03.273 16:30:07 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 347525' 00:05:03.273 killing process with pid 347525 00:05:03.273 16:30:07 -- common/autotest_common.sh@969 -- # kill 347525 00:05:03.273 16:30:07 -- common/autotest_common.sh@974 -- # wait 347525 00:05:05.881 16:30:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:05.881 16:30:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:05.881 16:30:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.881 16:30:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.881 16:30:09 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:05.881 16:30:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.881 16:30:09 -- common/autotest_common.sh@10 -- # set +x 00:05:05.881 16:30:09 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:05.881 16:30:09 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:05.881 16:30:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.881 16:30:09 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:05:05.881 16:30:09 -- common/autotest_common.sh@10 -- # set +x 00:05:05.881 ************************************ 00:05:05.881 START TEST env 00:05:05.881 ************************************ 00:05:05.881 16:30:09 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:05.881 * Looking for test storage... 00:05:05.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:05.881 16:30:10 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.881 16:30:10 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.881 16:30:10 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.881 16:30:10 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.881 16:30:10 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.881 16:30:10 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.881 16:30:10 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.881 16:30:10 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.881 16:30:10 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.881 16:30:10 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.881 16:30:10 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.881 16:30:10 env -- scripts/common.sh@344 -- # case "$op" in 00:05:05.881 16:30:10 env -- scripts/common.sh@345 -- # : 1 00:05:05.881 16:30:10 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.881 16:30:10 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.881 16:30:10 env -- scripts/common.sh@365 -- # decimal 1 00:05:05.881 16:30:10 env -- scripts/common.sh@353 -- # local d=1 00:05:05.881 16:30:10 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.881 16:30:10 env -- scripts/common.sh@355 -- # echo 1 00:05:05.881 16:30:10 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.881 16:30:10 env -- scripts/common.sh@366 -- # decimal 2 00:05:05.881 16:30:10 env -- scripts/common.sh@353 -- # local d=2 00:05:05.881 16:30:10 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.881 16:30:10 env -- scripts/common.sh@355 -- # echo 2 00:05:05.881 16:30:10 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.881 16:30:10 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.881 16:30:10 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.881 16:30:10 env -- scripts/common.sh@368 -- # return 0 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:05.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.881 --rc genhtml_branch_coverage=1 00:05:05.881 --rc genhtml_function_coverage=1 00:05:05.881 --rc genhtml_legend=1 00:05:05.881 --rc geninfo_all_blocks=1 00:05:05.881 --rc geninfo_unexecuted_blocks=1 00:05:05.881 00:05:05.881 ' 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:05.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.881 --rc genhtml_branch_coverage=1 00:05:05.881 --rc genhtml_function_coverage=1 00:05:05.881 --rc genhtml_legend=1 00:05:05.881 --rc geninfo_all_blocks=1 00:05:05.881 --rc geninfo_unexecuted_blocks=1 00:05:05.881 00:05:05.881 ' 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:05.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.881 --rc genhtml_branch_coverage=1 00:05:05.881 --rc genhtml_function_coverage=1 00:05:05.881 --rc genhtml_legend=1 00:05:05.881 --rc geninfo_all_blocks=1 00:05:05.881 --rc geninfo_unexecuted_blocks=1 00:05:05.881 00:05:05.881 ' 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:05.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.881 --rc genhtml_branch_coverage=1 00:05:05.881 --rc genhtml_function_coverage=1 00:05:05.881 --rc genhtml_legend=1 00:05:05.881 --rc geninfo_all_blocks=1 00:05:05.881 --rc geninfo_unexecuted_blocks=1 00:05:05.881 00:05:05.881 ' 00:05:05.881 16:30:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.881 16:30:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.881 ************************************ 00:05:05.881 START TEST env_memory 00:05:05.881 ************************************ 00:05:05.881 16:30:10 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:05.881 00:05:05.881 00:05:05.881 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.881 http://cunit.sourceforge.net/ 00:05:05.881 00:05:05.881 00:05:05.881 Suite: memory 00:05:05.881 Test: alloc and free memory map ...[2024-10-14 16:30:10.194125] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.881 passed 00:05:05.881 Test: mem map translation ...[2024-10-14 16:30:10.212960] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.881 [2024-10-14 16:30:10.212975] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.881 [2024-10-14 16:30:10.213025] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.881 [2024-10-14 16:30:10.213031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.881 passed 00:05:05.881 Test: mem map registration ...[2024-10-14 16:30:10.251382] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:05.881 [2024-10-14 16:30:10.251397] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:05.881 passed 00:05:05.881 Test: mem map adjacent registrations ...passed 00:05:05.881 00:05:05.881 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.881 suites 1 1 n/a 0 0 00:05:05.881 tests 4 4 4 0 0 00:05:05.881 asserts 152 152 152 0 n/a 00:05:05.881 00:05:05.881 Elapsed time = 0.137 seconds 00:05:05.881 00:05:05.881 real 0m0.146s 00:05:05.881 user 0m0.140s 00:05:05.881 sys 0m0.005s 00:05:05.881 16:30:10 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.881 16:30:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:05.881 ************************************ 00:05:05.881 END TEST env_memory 00:05:05.881 ************************************ 00:05:05.881 16:30:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.881 16:30:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.881 16:30:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.881 ************************************ 00:05:05.881 START TEST env_vtophys 00:05:05.881 ************************************ 00:05:05.881 16:30:10 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.881 EAL: lib.eal log level changed from notice to debug 00:05:05.881 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.881 EAL: Detected lcore 1 as core 1 on socket 0 00:05:05.881 EAL: Detected lcore 2 as core 2 on socket 0 00:05:05.881 EAL: Detected lcore 3 as core 3 on socket 0 00:05:05.881 EAL: Detected lcore 4 as core 4 on socket 0 00:05:05.881 EAL: Detected lcore 5 as core 5 on socket 0 00:05:05.881 EAL: Detected lcore 6 as core 6 on socket 0 00:05:05.881 EAL: Detected lcore 7 as core 8 on socket 0 00:05:05.881 EAL: Detected lcore 8 as core 9 on socket 0 00:05:05.881 EAL: Detected lcore 9 as core 10 on socket 0 00:05:05.881 EAL: Detected lcore 10 as 
core 11 on socket 0 00:05:05.881 EAL: Detected lcore 11 as core 12 on socket 0 00:05:05.881 EAL: Detected lcore 12 as core 13 on socket 0 00:05:05.881 EAL: Detected lcore 13 as core 16 on socket 0 00:05:05.881 EAL: Detected lcore 14 as core 17 on socket 0 00:05:05.881 EAL: Detected lcore 15 as core 18 on socket 0 00:05:05.881 EAL: Detected lcore 16 as core 19 on socket 0 00:05:05.881 EAL: Detected lcore 17 as core 20 on socket 0 00:05:05.881 EAL: Detected lcore 18 as core 21 on socket 0 00:05:05.881 EAL: Detected lcore 19 as core 25 on socket 0 00:05:05.881 EAL: Detected lcore 20 as core 26 on socket 0 00:05:05.881 EAL: Detected lcore 21 as core 27 on socket 0 00:05:05.881 EAL: Detected lcore 22 as core 28 on socket 0 00:05:05.881 EAL: Detected lcore 23 as core 29 on socket 0 00:05:05.881 EAL: Detected lcore 24 as core 0 on socket 1 00:05:05.881 EAL: Detected lcore 25 as core 1 on socket 1 00:05:05.881 EAL: Detected lcore 26 as core 2 on socket 1 00:05:05.881 EAL: Detected lcore 27 as core 3 on socket 1 00:05:05.881 EAL: Detected lcore 28 as core 4 on socket 1 00:05:05.882 EAL: Detected lcore 29 as core 5 on socket 1 00:05:05.882 EAL: Detected lcore 30 as core 6 on socket 1 00:05:05.882 EAL: Detected lcore 31 as core 8 on socket 1 00:05:05.882 EAL: Detected lcore 32 as core 10 on socket 1 00:05:05.882 EAL: Detected lcore 33 as core 11 on socket 1 00:05:05.882 EAL: Detected lcore 34 as core 12 on socket 1 00:05:05.882 EAL: Detected lcore 35 as core 13 on socket 1 00:05:05.882 EAL: Detected lcore 36 as core 16 on socket 1 00:05:05.882 EAL: Detected lcore 37 as core 17 on socket 1 00:05:05.882 EAL: Detected lcore 38 as core 18 on socket 1 00:05:05.882 EAL: Detected lcore 39 as core 19 on socket 1 00:05:05.882 EAL: Detected lcore 40 as core 20 on socket 1 00:05:05.882 EAL: Detected lcore 41 as core 21 on socket 1 00:05:05.882 EAL: Detected lcore 42 as core 24 on socket 1 00:05:05.882 EAL: Detected lcore 43 as core 25 on socket 1 00:05:05.882 EAL: Detected lcore 44 as core 26 on socket 1 00:05:05.882 EAL: Detected lcore 45 as core 27 on socket 1 00:05:05.882 EAL: Detected lcore 46 as core 28 on socket 1 00:05:05.882 EAL: Detected lcore 47 as core 29 on socket 1 00:05:05.882 EAL: Detected lcore 48 as core 0 on socket 0 00:05:05.882 EAL: Detected lcore 49 as core 1 on socket 0 00:05:05.882 EAL: Detected lcore 50 as core 2 on socket 0 00:05:05.882 EAL: Detected lcore 51 as core 3 on socket 0 00:05:05.882 EAL: Detected lcore 52 as core 4 on socket 0 00:05:05.882 EAL: Detected lcore 53 as core 5 on socket 0 00:05:05.882 EAL: Detected lcore 54 as core 6 on socket 0 00:05:05.882 EAL: Detected lcore 55 as core 8 on socket 0 00:05:05.882 EAL: Detected lcore 56 as core 9 on socket 0 00:05:05.882 EAL: Detected lcore 57 as core 10 on socket 0 00:05:05.882 EAL: Detected lcore 58 as core 11 on socket 0 00:05:05.882 EAL: Detected lcore 59 as core 12 on socket 0 00:05:05.882 EAL: Detected lcore 60 as core 13 on socket 0 00:05:05.882 EAL: Detected lcore 61 as core 16 on socket 0 00:05:05.882 EAL: Detected lcore 62 as core 17 on socket 0 00:05:05.882 EAL: Detected lcore 63 as core 18 on socket 0 00:05:05.882 EAL: Detected lcore 64 as core 19 on socket 0 00:05:05.882 EAL: Detected lcore 65 as core 20 on socket 0 00:05:05.882 EAL: Detected lcore 66 as core 21 on socket 0 00:05:05.882 EAL: Detected lcore 67 as core 25 on socket 0 00:05:05.882 EAL: Detected lcore 68 as core 26 on socket 0 00:05:05.882 EAL: Detected lcore 69 as core 27 on socket 0 00:05:05.882 EAL: Detected lcore 70 as core 28 on socket 0 
00:05:05.882 EAL: Detected lcore 71 as core 29 on socket 0 00:05:05.882 EAL: Detected lcore 72 as core 0 on socket 1 00:05:05.882 EAL: Detected lcore 73 as core 1 on socket 1 00:05:05.882 EAL: Detected lcore 74 as core 2 on socket 1 00:05:05.882 EAL: Detected lcore 75 as core 3 on socket 1 00:05:05.882 EAL: Detected lcore 76 as core 4 on socket 1 00:05:05.882 EAL: Detected lcore 77 as core 5 on socket 1 00:05:05.882 EAL: Detected lcore 78 as core 6 on socket 1 00:05:05.882 EAL: Detected lcore 79 as core 8 on socket 1 00:05:05.882 EAL: Detected lcore 80 as core 10 on socket 1 00:05:05.882 EAL: Detected lcore 81 as core 11 on socket 1 00:05:05.882 EAL: Detected lcore 82 as core 12 on socket 1 00:05:05.882 EAL: Detected lcore 83 as core 13 on socket 1 00:05:05.882 EAL: Detected lcore 84 as core 16 on socket 1 00:05:05.882 EAL: Detected lcore 85 as core 17 on socket 1 00:05:05.882 EAL: Detected lcore 86 as core 18 on socket 1 00:05:05.882 EAL: Detected lcore 87 as core 19 on socket 1 00:05:05.882 EAL: Detected lcore 88 as core 20 on socket 1 00:05:05.882 EAL: Detected lcore 89 as core 21 on socket 1 00:05:05.882 EAL: Detected lcore 90 as core 24 on socket 1 00:05:05.882 EAL: Detected lcore 91 as core 25 on socket 1 00:05:05.882 EAL: Detected lcore 92 as core 26 on socket 1 00:05:05.882 EAL: Detected lcore 93 as core 27 on socket 1 00:05:05.882 EAL: Detected lcore 94 as core 28 on socket 1 00:05:05.882 EAL: Detected lcore 95 as core 29 on socket 1 00:05:05.882 EAL: Maximum logical cores by configuration: 128 00:05:05.882 EAL: Detected CPU lcores: 96 00:05:05.882 EAL: Detected NUMA nodes: 2 00:05:05.882 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:05.882 EAL: Detected shared linkage of DPDK 00:05:05.882 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.882 EAL: Bus pci wants IOVA as 'DC' 00:05:05.882 EAL: Buses did not request a specific IOVA mode. 00:05:05.882 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:05.882 EAL: Selected IOVA mode 'VA' 00:05:05.882 EAL: Probing VFIO support... 00:05:05.882 EAL: IOMMU type 1 (Type 1) is supported 00:05:05.882 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:05.882 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:05.882 EAL: VFIO support initialized 00:05:05.882 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.882 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.882 EAL: Setting up physically contiguous memory... 
00:05:05.882 EAL: Setting maximum number of open files to 524288 00:05:05.882 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.882 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:05.882 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.882 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.882 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.882 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.882 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.882 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.882 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.882 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.882 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.882 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.882 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.882 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.882 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.882 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.882 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.882 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.882 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.882 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.882 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.882 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.882 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.882 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.882 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.882 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.882 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.882 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:05.882 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.882 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:05.882 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.882 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.882 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:05.882 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:05.882 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.882 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:05.882 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.882 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.882 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:05.882 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:05.882 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.882 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:05.882 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.882 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.882 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:05.882 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:05.882 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.882 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:05.882 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.882 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.882 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:05.882 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:05.882 EAL: Hugepages will be freed exactly as allocated. 00:05:05.882 EAL: No shared files mode enabled, IPC is disabled 00:05:05.882 EAL: No shared files mode enabled, IPC is disabled 00:05:05.882 EAL: TSC frequency is ~2100000 KHz 00:05:05.882 EAL: Main lcore 0 is ready (tid=7fd4ec2c3a00;cpuset=[0]) 00:05:05.882 EAL: Trying to obtain current memory policy. 00:05:05.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.882 EAL: Restoring previous memory policy: 0 00:05:05.882 EAL: request: mp_malloc_sync 00:05:05.882 EAL: No shared files mode enabled, IPC is disabled 00:05:05.882 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.882 EAL: No shared files mode enabled, IPC is disabled 00:05:05.882 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:05.882 EAL: Mem event callback 'spdk:(nil)' registered 00:05:05.882 00:05:05.882 00:05:05.882 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.882 http://cunit.sourceforge.net/ 00:05:05.882 00:05:05.882 00:05:05.882 Suite: components_suite 00:05:05.882 Test: vtophys_malloc_test ...passed 00:05:05.882 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:05.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.882 EAL: Restoring previous memory policy: 4 00:05:05.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.882 EAL: request: mp_malloc_sync 00:05:05.882 EAL: No shared files mode enabled, IPC is disabled 00:05:05.882 EAL: Heap on socket 0 was expanded by 4MB 00:05:05.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.882 EAL: request: mp_malloc_sync 00:05:05.882 EAL: No shared files mode enabled, IPC is disabled 00:05:05.882 EAL: Heap on socket 0 was shrunk by 4MB 00:05:05.882 EAL: Trying to obtain current memory policy. 00:05:05.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.882 EAL: Restoring previous memory policy: 4 00:05:05.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.882 EAL: request: mp_malloc_sync 00:05:05.882 EAL: No shared files mode enabled, IPC is disabled 00:05:05.882 EAL: Heap on socket 0 was expanded by 6MB 00:05:05.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.882 EAL: request: mp_malloc_sync 00:05:05.882 EAL: No shared files mode enabled, IPC is disabled 00:05:05.882 EAL: Heap on socket 0 was shrunk by 6MB 00:05:05.882 EAL: Trying to obtain current memory policy. 00:05:05.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.882 EAL: Restoring previous memory policy: 4 00:05:05.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.882 EAL: request: mp_malloc_sync 00:05:05.882 EAL: No shared files mode enabled, IPC is disabled 00:05:05.882 EAL: Heap on socket 0 was expanded by 10MB 00:05:05.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.882 EAL: request: mp_malloc_sync 00:05:05.882 EAL: No shared files mode enabled, IPC is disabled 00:05:05.882 EAL: Heap on socket 0 was shrunk by 10MB 00:05:05.882 EAL: Trying to obtain current memory policy. 
00:05:05.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.882 EAL: Restoring previous memory policy: 4 00:05:05.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.882 EAL: request: mp_malloc_sync 00:05:05.883 EAL: No shared files mode enabled, IPC is disabled 00:05:05.883 EAL: Heap on socket 0 was expanded by 18MB 00:05:05.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.883 EAL: request: mp_malloc_sync 00:05:05.883 EAL: No shared files mode enabled, IPC is disabled 00:05:05.883 EAL: Heap on socket 0 was shrunk by 18MB 00:05:05.883 EAL: Trying to obtain current memory policy. 00:05:05.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.883 EAL: Restoring previous memory policy: 4 00:05:05.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.883 EAL: request: mp_malloc_sync 00:05:05.883 EAL: No shared files mode enabled, IPC is disabled 00:05:05.883 EAL: Heap on socket 0 was expanded by 34MB 00:05:05.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.883 EAL: request: mp_malloc_sync 00:05:05.883 EAL: No shared files mode enabled, IPC is disabled 00:05:05.883 EAL: Heap on socket 0 was shrunk by 34MB 00:05:05.883 EAL: Trying to obtain current memory policy. 00:05:05.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.883 EAL: Restoring previous memory policy: 4 00:05:05.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.883 EAL: request: mp_malloc_sync 00:05:05.883 EAL: No shared files mode enabled, IPC is disabled 00:05:05.883 EAL: Heap on socket 0 was expanded by 66MB 00:05:05.883 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.883 EAL: request: mp_malloc_sync 00:05:05.883 EAL: No shared files mode enabled, IPC is disabled 00:05:05.883 EAL: Heap on socket 0 was shrunk by 66MB 00:05:05.883 EAL: Trying to obtain current memory policy. 00:05:05.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.142 EAL: Restoring previous memory policy: 4 00:05:06.142 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.142 EAL: request: mp_malloc_sync 00:05:06.142 EAL: No shared files mode enabled, IPC is disabled 00:05:06.142 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.142 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.142 EAL: request: mp_malloc_sync 00:05:06.142 EAL: No shared files mode enabled, IPC is disabled 00:05:06.142 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.142 EAL: Trying to obtain current memory policy. 00:05:06.142 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.142 EAL: Restoring previous memory policy: 4 00:05:06.142 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.142 EAL: request: mp_malloc_sync 00:05:06.142 EAL: No shared files mode enabled, IPC is disabled 00:05:06.142 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.142 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.142 EAL: request: mp_malloc_sync 00:05:06.142 EAL: No shared files mode enabled, IPC is disabled 00:05:06.142 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.142 EAL: Trying to obtain current memory policy. 
00:05:06.142 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.401 EAL: Restoring previous memory policy: 4 00:05:06.401 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.401 EAL: request: mp_malloc_sync 00:05:06.401 EAL: No shared files mode enabled, IPC is disabled 00:05:06.401 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.401 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.401 EAL: request: mp_malloc_sync 00:05:06.401 EAL: No shared files mode enabled, IPC is disabled 00:05:06.401 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.401 EAL: Trying to obtain current memory policy. 00:05:06.401 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.660 EAL: Restoring previous memory policy: 4 00:05:06.660 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.660 EAL: request: mp_malloc_sync 00:05:06.660 EAL: No shared files mode enabled, IPC is disabled 00:05:06.660 EAL: Heap on socket 0 was expanded by 1026MB 00:05:06.919 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.919 EAL: request: mp_malloc_sync 00:05:06.919 EAL: No shared files mode enabled, IPC is disabled 00:05:06.919 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:06.919 passed 00:05:06.919 00:05:06.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.919 suites 1 1 n/a 0 0 00:05:06.919 tests 2 2 2 0 0 00:05:06.919 asserts 497 497 497 0 n/a 00:05:06.919 00:05:06.919 Elapsed time = 0.963 seconds 00:05:06.919 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.919 EAL: request: mp_malloc_sync 00:05:06.919 EAL: No shared files mode enabled, IPC is disabled 00:05:06.919 EAL: Heap on socket 0 was shrunk by 2MB 00:05:06.919 EAL: No shared files mode enabled, IPC is disabled 00:05:06.919 EAL: No shared files mode enabled, IPC is disabled 00:05:06.919 EAL: No shared files mode enabled, IPC is disabled 00:05:06.919 00:05:06.919 real 0m1.095s 00:05:06.919 user 0m0.644s 00:05:06.919 sys 0m0.418s 00:05:06.919 16:30:11 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.919 16:30:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:06.919 ************************************ 00:05:06.919 END TEST env_vtophys 00:05:06.920 ************************************ 00:05:06.920 16:30:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:06.920 16:30:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.920 16:30:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.920 16:30:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.920 ************************************ 00:05:06.920 START TEST env_pci 00:05:06.920 ************************************ 00:05:06.920 16:30:11 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:06.920 00:05:06.920 00:05:06.920 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.920 http://cunit.sourceforge.net/ 00:05:06.920 00:05:06.920 00:05:06.920 Suite: pci 00:05:06.920 Test: pci_hook ...[2024-10-14 16:30:11.552879] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 349234 has claimed it 00:05:07.179 EAL: Cannot find device (10000:00:01.0) 00:05:07.179 EAL: Failed to attach device on primary process 00:05:07.179 passed 00:05:07.179 00:05:07.179 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:07.179 suites 1 1 n/a 0 0 00:05:07.179 tests 1 1 1 0 0 00:05:07.179 asserts 25 25 25 0 n/a 00:05:07.179 00:05:07.179 Elapsed time = 0.026 seconds 00:05:07.179 00:05:07.179 real 0m0.046s 00:05:07.179 user 0m0.014s 00:05:07.179 sys 0m0.032s 00:05:07.179 16:30:11 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.179 16:30:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:07.179 ************************************ 00:05:07.179 END TEST env_pci 00:05:07.179 ************************************ 00:05:07.179 16:30:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:07.179 16:30:11 env -- env/env.sh@15 -- # uname 00:05:07.179 16:30:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:07.179 16:30:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:07.179 16:30:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.179 16:30:11 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:07.179 16:30:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.179 16:30:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.179 ************************************ 00:05:07.179 START TEST env_dpdk_post_init 00:05:07.179 ************************************ 00:05:07.179 16:30:11 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.179 EAL: Detected CPU lcores: 96 00:05:07.179 EAL: Detected NUMA nodes: 2 00:05:07.179 EAL: Detected shared linkage of DPDK 00:05:07.179 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.179 EAL: Selected IOVA mode 'VA' 00:05:07.179 EAL: VFIO support initialized 00:05:07.179 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.179 EAL: Using IOMMU type 1 (Type 1) 00:05:07.179 EAL: Ignore mapping IO port bar(1) 00:05:07.179 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:07.179 EAL: Ignore mapping IO port bar(1) 00:05:07.179 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:07.179 EAL: Ignore mapping IO port bar(1) 00:05:07.179 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:07.439 EAL: Ignore mapping IO port bar(1) 00:05:07.439 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:07.439 EAL: Ignore mapping IO port bar(1) 00:05:07.439 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:07.439 EAL: Ignore mapping IO port bar(1) 00:05:07.439 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:07.439 EAL: Ignore mapping IO port bar(1) 00:05:07.439 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:07.439 EAL: Ignore mapping IO port bar(1) 00:05:07.439 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:08.008 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:08.008 EAL: Ignore mapping IO port bar(1) 00:05:08.008 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:08.008 EAL: Ignore mapping IO port bar(1) 00:05:08.008 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:08.267 EAL: Ignore mapping IO port bar(1) 00:05:08.267 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:08.267 EAL: Ignore mapping IO port bar(1) 00:05:08.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:08.267 EAL: Ignore mapping IO port bar(1) 00:05:08.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:08.267 EAL: Ignore mapping IO port bar(1) 00:05:08.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:08.267 EAL: Ignore mapping IO port bar(1) 00:05:08.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:08.267 EAL: Ignore mapping IO port bar(1) 00:05:08.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:12.457 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:12.457 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:12.457 Starting DPDK initialization... 00:05:12.457 Starting SPDK post initialization... 00:05:12.458 SPDK NVMe probe 00:05:12.458 Attaching to 0000:5e:00.0 00:05:12.458 Attached to 0000:5e:00.0 00:05:12.458 Cleaning up... 00:05:12.458 00:05:12.458 real 0m4.912s 00:05:12.458 user 0m3.458s 00:05:12.458 sys 0m0.522s 00:05:12.458 16:30:16 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.458 16:30:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.458 ************************************ 00:05:12.458 END TEST env_dpdk_post_init 00:05:12.458 ************************************ 00:05:12.458 16:30:16 env -- env/env.sh@26 -- # uname 00:05:12.458 16:30:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:12.458 16:30:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.458 16:30:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.458 16:30:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.458 16:30:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.458 ************************************ 00:05:12.458 START TEST env_mem_callbacks 00:05:12.458 ************************************ 00:05:12.458 16:30:16 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.458 EAL: Detected CPU lcores: 96 00:05:12.458 EAL: Detected NUMA nodes: 2 00:05:12.458 EAL: Detected shared linkage of DPDK 00:05:12.458 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.458 EAL: Selected IOVA mode 'VA' 00:05:12.458 EAL: VFIO support initialized 00:05:12.458 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.458 00:05:12.458 00:05:12.458 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.458 http://cunit.sourceforge.net/ 00:05:12.458 00:05:12.458 00:05:12.458 Suite: memory 00:05:12.458 Test: test ... 
00:05:12.458 register 0x200000200000 2097152 00:05:12.458 malloc 3145728 00:05:12.458 register 0x200000400000 4194304 00:05:12.458 buf 0x200000500000 len 3145728 PASSED 00:05:12.458 malloc 64 00:05:12.458 buf 0x2000004fff40 len 64 PASSED 00:05:12.458 malloc 4194304 00:05:12.458 register 0x200000800000 6291456 00:05:12.458 buf 0x200000a00000 len 4194304 PASSED 00:05:12.458 free 0x200000500000 3145728 00:05:12.458 free 0x2000004fff40 64 00:05:12.458 unregister 0x200000400000 4194304 PASSED 00:05:12.458 free 0x200000a00000 4194304 00:05:12.458 unregister 0x200000800000 6291456 PASSED 00:05:12.458 malloc 8388608 00:05:12.458 register 0x200000400000 10485760 00:05:12.458 buf 0x200000600000 len 8388608 PASSED 00:05:12.458 free 0x200000600000 8388608 00:05:12.458 unregister 0x200000400000 10485760 PASSED 00:05:12.458 passed 00:05:12.458 00:05:12.458 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.458 suites 1 1 n/a 0 0 00:05:12.458 tests 1 1 1 0 0 00:05:12.458 asserts 15 15 15 0 n/a 00:05:12.458 00:05:12.458 Elapsed time = 0.008 seconds 00:05:12.458 00:05:12.458 real 0m0.050s 00:05:12.458 user 0m0.016s 00:05:12.458 sys 0m0.033s 00:05:12.458 16:30:16 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.458 16:30:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:12.458 ************************************ 00:05:12.458 END TEST env_mem_callbacks 00:05:12.458 ************************************ 00:05:12.458 00:05:12.458 real 0m6.764s 00:05:12.458 user 0m4.510s 00:05:12.458 sys 0m1.321s 00:05:12.458 16:30:16 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.458 16:30:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.458 ************************************ 00:05:12.458 END TEST env 00:05:12.458 ************************************ 00:05:12.458 16:30:16 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:12.458 16:30:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.458 16:30:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.458 16:30:16 -- common/autotest_common.sh@10 -- # set +x 00:05:12.458 ************************************ 00:05:12.458 START TEST rpc 00:05:12.458 ************************************ 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:12.458 * Looking for test storage... 
00:05:12.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.458 16:30:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.458 16:30:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.458 16:30:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.458 16:30:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.458 16:30:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.458 16:30:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.458 16:30:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.458 16:30:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.458 16:30:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.458 16:30:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.458 16:30:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.458 16:30:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.458 16:30:16 rpc -- scripts/common.sh@345 -- # : 1 00:05:12.458 16:30:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.458 16:30:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.458 16:30:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.458 16:30:16 rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.458 16:30:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.458 16:30:16 rpc -- scripts/common.sh@355 -- # echo 1 00:05:12.458 16:30:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.458 16:30:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:12.458 16:30:16 rpc -- scripts/common.sh@353 -- # local d=2 00:05:12.458 16:30:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.458 16:30:16 rpc -- scripts/common.sh@355 -- # echo 2 00:05:12.458 16:30:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.458 16:30:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.458 16:30:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.458 16:30:16 rpc -- scripts/common.sh@368 -- # return 0 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.458 --rc genhtml_branch_coverage=1 00:05:12.458 --rc genhtml_function_coverage=1 00:05:12.458 --rc genhtml_legend=1 00:05:12.458 --rc geninfo_all_blocks=1 00:05:12.458 --rc geninfo_unexecuted_blocks=1 00:05:12.458 00:05:12.458 ' 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.458 --rc genhtml_branch_coverage=1 00:05:12.458 --rc genhtml_function_coverage=1 00:05:12.458 --rc genhtml_legend=1 00:05:12.458 --rc geninfo_all_blocks=1 00:05:12.458 --rc geninfo_unexecuted_blocks=1 00:05:12.458 00:05:12.458 ' 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.458 --rc genhtml_branch_coverage=1 00:05:12.458 --rc genhtml_function_coverage=1 
00:05:12.458 --rc genhtml_legend=1 00:05:12.458 --rc geninfo_all_blocks=1 00:05:12.458 --rc geninfo_unexecuted_blocks=1 00:05:12.458 00:05:12.458 ' 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.458 --rc genhtml_branch_coverage=1 00:05:12.458 --rc genhtml_function_coverage=1 00:05:12.458 --rc genhtml_legend=1 00:05:12.458 --rc geninfo_all_blocks=1 00:05:12.458 --rc geninfo_unexecuted_blocks=1 00:05:12.458 00:05:12.458 ' 00:05:12.458 16:30:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=350289 00:05:12.458 16:30:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.458 16:30:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:12.458 16:30:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 350289 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@831 -- # '[' -z 350289 ']' 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.458 16:30:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.458 [2024-10-14 16:30:17.017790] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:05:12.458 [2024-10-14 16:30:17.017834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350289 ] 00:05:12.458 [2024-10-14 16:30:17.086462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.717 [2024-10-14 16:30:17.127957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:12.717 [2024-10-14 16:30:17.127989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 350289' to capture a snapshot of events at runtime. 00:05:12.717 [2024-10-14 16:30:17.127996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.717 [2024-10-14 16:30:17.128001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.717 [2024-10-14 16:30:17.128006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid350289 for offline analysis/debug. 
00:05:12.717 [2024-10-14 16:30:17.128561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.717 16:30:17 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.717 16:30:17 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:12.718 16:30:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.718 16:30:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.718 16:30:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:12.718 16:30:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:12.718 16:30:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.718 16:30:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.718 16:30:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.977 ************************************ 00:05:12.977 START TEST rpc_integrity 00:05:12.977 ************************************ 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.977 { 00:05:12.977 "name": "Malloc0", 00:05:12.977 "aliases": [ 00:05:12.977 "600de466-629a-488e-874f-7313d1991dea" 00:05:12.977 ], 00:05:12.977 "product_name": "Malloc disk", 00:05:12.977 "block_size": 512, 00:05:12.977 "num_blocks": 16384, 00:05:12.977 "uuid": "600de466-629a-488e-874f-7313d1991dea", 00:05:12.977 "assigned_rate_limits": { 00:05:12.977 "rw_ios_per_sec": 0, 00:05:12.977 "rw_mbytes_per_sec": 0, 00:05:12.977 "r_mbytes_per_sec": 0, 00:05:12.977 "w_mbytes_per_sec": 0 00:05:12.977 }, 
00:05:12.977 "claimed": false, 00:05:12.977 "zoned": false, 00:05:12.977 "supported_io_types": { 00:05:12.977 "read": true, 00:05:12.977 "write": true, 00:05:12.977 "unmap": true, 00:05:12.977 "flush": true, 00:05:12.977 "reset": true, 00:05:12.977 "nvme_admin": false, 00:05:12.977 "nvme_io": false, 00:05:12.977 "nvme_io_md": false, 00:05:12.977 "write_zeroes": true, 00:05:12.977 "zcopy": true, 00:05:12.977 "get_zone_info": false, 00:05:12.977 "zone_management": false, 00:05:12.977 "zone_append": false, 00:05:12.977 "compare": false, 00:05:12.977 "compare_and_write": false, 00:05:12.977 "abort": true, 00:05:12.977 "seek_hole": false, 00:05:12.977 "seek_data": false, 00:05:12.977 "copy": true, 00:05:12.977 "nvme_iov_md": false 00:05:12.977 }, 00:05:12.977 "memory_domains": [ 00:05:12.977 { 00:05:12.977 "dma_device_id": "system", 00:05:12.977 "dma_device_type": 1 00:05:12.977 }, 00:05:12.977 { 00:05:12.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.977 "dma_device_type": 2 00:05:12.977 } 00:05:12.977 ], 00:05:12.977 "driver_specific": {} 00:05:12.977 } 00:05:12.977 ]' 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.977 [2024-10-14 16:30:17.497111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:12.977 [2024-10-14 16:30:17.497137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:12.977 [2024-10-14 16:30:17.497148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd16790 00:05:12.977 [2024-10-14 16:30:17.497155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:12.977 [2024-10-14 16:30:17.498228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:12.977 [2024-10-14 16:30:17.498246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:12.977 Passthru0 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.977 { 00:05:12.977 "name": "Malloc0", 00:05:12.977 "aliases": [ 00:05:12.977 "600de466-629a-488e-874f-7313d1991dea" 00:05:12.977 ], 00:05:12.977 "product_name": "Malloc disk", 00:05:12.977 "block_size": 512, 00:05:12.977 "num_blocks": 16384, 00:05:12.977 "uuid": "600de466-629a-488e-874f-7313d1991dea", 00:05:12.977 "assigned_rate_limits": { 00:05:12.977 "rw_ios_per_sec": 0, 00:05:12.977 "rw_mbytes_per_sec": 0, 00:05:12.977 "r_mbytes_per_sec": 0, 00:05:12.977 "w_mbytes_per_sec": 0 00:05:12.977 }, 00:05:12.977 "claimed": true, 00:05:12.977 "claim_type": "exclusive_write", 00:05:12.977 "zoned": false, 00:05:12.977 "supported_io_types": { 00:05:12.977 "read": true, 00:05:12.977 "write": true, 00:05:12.977 "unmap": true, 00:05:12.977 "flush": 
true, 00:05:12.977 "reset": true, 00:05:12.977 "nvme_admin": false, 00:05:12.977 "nvme_io": false, 00:05:12.977 "nvme_io_md": false, 00:05:12.977 "write_zeroes": true, 00:05:12.977 "zcopy": true, 00:05:12.977 "get_zone_info": false, 00:05:12.977 "zone_management": false, 00:05:12.977 "zone_append": false, 00:05:12.977 "compare": false, 00:05:12.977 "compare_and_write": false, 00:05:12.977 "abort": true, 00:05:12.977 "seek_hole": false, 00:05:12.977 "seek_data": false, 00:05:12.977 "copy": true, 00:05:12.977 "nvme_iov_md": false 00:05:12.977 }, 00:05:12.977 "memory_domains": [ 00:05:12.977 { 00:05:12.977 "dma_device_id": "system", 00:05:12.977 "dma_device_type": 1 00:05:12.977 }, 00:05:12.977 { 00:05:12.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.977 "dma_device_type": 2 00:05:12.977 } 00:05:12.977 ], 00:05:12.977 "driver_specific": {} 00:05:12.977 }, 00:05:12.977 { 00:05:12.977 "name": "Passthru0", 00:05:12.977 "aliases": [ 00:05:12.977 "74b32cec-0233-55e1-b7b4-f03e16ab5d8e" 00:05:12.977 ], 00:05:12.977 "product_name": "passthru", 00:05:12.977 "block_size": 512, 00:05:12.977 "num_blocks": 16384, 00:05:12.977 "uuid": "74b32cec-0233-55e1-b7b4-f03e16ab5d8e", 00:05:12.977 "assigned_rate_limits": { 00:05:12.977 "rw_ios_per_sec": 0, 00:05:12.977 "rw_mbytes_per_sec": 0, 00:05:12.977 "r_mbytes_per_sec": 0, 00:05:12.977 "w_mbytes_per_sec": 0 00:05:12.977 }, 00:05:12.977 "claimed": false, 00:05:12.977 "zoned": false, 00:05:12.977 "supported_io_types": { 00:05:12.977 "read": true, 00:05:12.977 "write": true, 00:05:12.977 "unmap": true, 00:05:12.977 "flush": true, 00:05:12.977 "reset": true, 00:05:12.977 "nvme_admin": false, 00:05:12.977 "nvme_io": false, 00:05:12.977 "nvme_io_md": false, 00:05:12.977 "write_zeroes": true, 00:05:12.977 "zcopy": true, 00:05:12.977 "get_zone_info": false, 00:05:12.977 "zone_management": false, 00:05:12.977 "zone_append": false, 00:05:12.977 "compare": false, 00:05:12.977 "compare_and_write": false, 00:05:12.977 "abort": true, 00:05:12.977 "seek_hole": false, 00:05:12.977 "seek_data": false, 00:05:12.977 "copy": true, 00:05:12.977 "nvme_iov_md": false 00:05:12.977 }, 00:05:12.977 "memory_domains": [ 00:05:12.977 { 00:05:12.977 "dma_device_id": "system", 00:05:12.977 "dma_device_type": 1 00:05:12.977 }, 00:05:12.977 { 00:05:12.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.977 "dma_device_type": 2 00:05:12.977 } 00:05:12.977 ], 00:05:12.977 "driver_specific": { 00:05:12.977 "passthru": { 00:05:12.977 "name": "Passthru0", 00:05:12.977 "base_bdev_name": "Malloc0" 00:05:12.977 } 00:05:12.977 } 00:05:12.977 } 00:05:12.977 ]' 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.977 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.977 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.978 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:12.978 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.978 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.978 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.978 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:12.978 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.978 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.978 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.978 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.978 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.236 16:30:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.236 00:05:13.236 real 0m0.274s 00:05:13.236 user 0m0.168s 00:05:13.236 sys 0m0.038s 00:05:13.236 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.236 16:30:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.236 ************************************ 00:05:13.236 END TEST rpc_integrity 00:05:13.236 ************************************ 00:05:13.236 16:30:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:13.236 16:30:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.236 16:30:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.236 16:30:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.236 ************************************ 00:05:13.236 START TEST rpc_plugins 00:05:13.236 ************************************ 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.236 { 00:05:13.236 "name": "Malloc1", 00:05:13.236 "aliases": [ 00:05:13.236 "067dc696-3039-4dac-9f2e-166ece543688" 00:05:13.236 ], 00:05:13.236 "product_name": "Malloc disk", 00:05:13.236 "block_size": 4096, 00:05:13.236 "num_blocks": 256, 00:05:13.236 "uuid": "067dc696-3039-4dac-9f2e-166ece543688", 00:05:13.236 "assigned_rate_limits": { 00:05:13.236 "rw_ios_per_sec": 0, 00:05:13.236 "rw_mbytes_per_sec": 0, 00:05:13.236 "r_mbytes_per_sec": 0, 00:05:13.236 "w_mbytes_per_sec": 0 00:05:13.236 }, 00:05:13.236 "claimed": false, 00:05:13.236 "zoned": false, 00:05:13.236 "supported_io_types": { 00:05:13.236 "read": true, 00:05:13.236 "write": true, 00:05:13.236 "unmap": true, 00:05:13.236 "flush": true, 00:05:13.236 "reset": true, 00:05:13.236 "nvme_admin": false, 00:05:13.236 "nvme_io": false, 00:05:13.236 "nvme_io_md": false, 00:05:13.236 "write_zeroes": true, 00:05:13.236 "zcopy": true, 00:05:13.236 "get_zone_info": false, 00:05:13.236 "zone_management": false, 00:05:13.236 "zone_append": false, 00:05:13.236 "compare": false, 00:05:13.236 "compare_and_write": false, 00:05:13.236 "abort": true, 00:05:13.236 "seek_hole": false, 00:05:13.236 "seek_data": false, 00:05:13.236 "copy": true, 00:05:13.236 "nvme_iov_md": false 
00:05:13.236 }, 00:05:13.236 "memory_domains": [ 00:05:13.236 { 00:05:13.236 "dma_device_id": "system", 00:05:13.236 "dma_device_type": 1 00:05:13.236 }, 00:05:13.236 { 00:05:13.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.236 "dma_device_type": 2 00:05:13.236 } 00:05:13.236 ], 00:05:13.236 "driver_specific": {} 00:05:13.236 } 00:05:13.236 ]' 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:13.236 16:30:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.236 00:05:13.236 real 0m0.141s 00:05:13.236 user 0m0.093s 00:05:13.236 sys 0m0.016s 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.236 16:30:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.236 ************************************ 00:05:13.236 END TEST rpc_plugins 00:05:13.236 ************************************ 00:05:13.493 16:30:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.493 16:30:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.493 16:30:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.493 16:30:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.493 ************************************ 00:05:13.493 START TEST rpc_trace_cmd_test 00:05:13.493 ************************************ 00:05:13.493 16:30:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:13.493 16:30:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:13.493 16:30:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.493 16:30:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.493 16:30:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.493 16:30:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.493 16:30:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:13.493 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid350289", 00:05:13.493 "tpoint_group_mask": "0x8", 00:05:13.493 "iscsi_conn": { 00:05:13.493 "mask": "0x2", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "scsi": { 00:05:13.493 "mask": "0x4", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "bdev": { 00:05:13.493 "mask": "0x8", 00:05:13.493 "tpoint_mask": "0xffffffffffffffff" 00:05:13.493 }, 00:05:13.493 "nvmf_rdma": { 00:05:13.493 "mask": "0x10", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "nvmf_tcp": { 00:05:13.493 "mask": "0x20", 00:05:13.493 
"tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "ftl": { 00:05:13.493 "mask": "0x40", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "blobfs": { 00:05:13.493 "mask": "0x80", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "dsa": { 00:05:13.493 "mask": "0x200", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "thread": { 00:05:13.493 "mask": "0x400", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "nvme_pcie": { 00:05:13.493 "mask": "0x800", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "iaa": { 00:05:13.493 "mask": "0x1000", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "nvme_tcp": { 00:05:13.493 "mask": "0x2000", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "bdev_nvme": { 00:05:13.493 "mask": "0x4000", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "sock": { 00:05:13.493 "mask": "0x8000", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "blob": { 00:05:13.493 "mask": "0x10000", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "bdev_raid": { 00:05:13.493 "mask": "0x20000", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 }, 00:05:13.493 "scheduler": { 00:05:13.493 "mask": "0x40000", 00:05:13.493 "tpoint_mask": "0x0" 00:05:13.493 } 00:05:13.493 }' 00:05:13.493 16:30:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:13.493 16:30:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:13.493 16:30:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.493 16:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.493 16:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.493 16:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.493 16:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.493 16:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.493 16:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.751 16:30:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.751 00:05:13.751 real 0m0.219s 00:05:13.751 user 0m0.183s 00:05:13.751 sys 0m0.025s 00:05:13.751 16:30:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.751 16:30:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.751 ************************************ 00:05:13.751 END TEST rpc_trace_cmd_test 00:05:13.751 ************************************ 00:05:13.751 16:30:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.751 16:30:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.751 16:30:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.751 16:30:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.751 16:30:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.751 16:30:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.751 ************************************ 00:05:13.751 START TEST rpc_daemon_integrity 00:05:13.751 ************************************ 00:05:13.751 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:13.751 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.751 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.752 16:30:18 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.752 { 00:05:13.752 "name": "Malloc2", 00:05:13.752 "aliases": [ 00:05:13.752 "b4fe0758-296d-49ed-9f0a-033c46e75167" 00:05:13.752 ], 00:05:13.752 "product_name": "Malloc disk", 00:05:13.752 "block_size": 512, 00:05:13.752 "num_blocks": 16384, 00:05:13.752 "uuid": "b4fe0758-296d-49ed-9f0a-033c46e75167", 00:05:13.752 "assigned_rate_limits": { 00:05:13.752 "rw_ios_per_sec": 0, 00:05:13.752 "rw_mbytes_per_sec": 0, 00:05:13.752 "r_mbytes_per_sec": 0, 00:05:13.752 "w_mbytes_per_sec": 0 00:05:13.752 }, 00:05:13.752 "claimed": false, 00:05:13.752 "zoned": false, 00:05:13.752 "supported_io_types": { 00:05:13.752 "read": true, 00:05:13.752 "write": true, 00:05:13.752 "unmap": true, 00:05:13.752 "flush": true, 00:05:13.752 "reset": true, 00:05:13.752 "nvme_admin": false, 00:05:13.752 "nvme_io": false, 00:05:13.752 "nvme_io_md": false, 00:05:13.752 "write_zeroes": true, 00:05:13.752 "zcopy": true, 00:05:13.752 "get_zone_info": false, 00:05:13.752 "zone_management": false, 00:05:13.752 "zone_append": false, 00:05:13.752 "compare": false, 00:05:13.752 "compare_and_write": false, 00:05:13.752 "abort": true, 00:05:13.752 "seek_hole": false, 00:05:13.752 "seek_data": false, 00:05:13.752 "copy": true, 00:05:13.752 "nvme_iov_md": false 00:05:13.752 }, 00:05:13.752 "memory_domains": [ 00:05:13.752 { 00:05:13.752 "dma_device_id": "system", 00:05:13.752 "dma_device_type": 1 00:05:13.752 }, 00:05:13.752 { 00:05:13.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.752 "dma_device_type": 2 00:05:13.752 } 00:05:13.752 ], 00:05:13.752 "driver_specific": {} 00:05:13.752 } 00:05:13.752 ]' 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.752 [2024-10-14 16:30:18.335374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:13.752 
[2024-10-14 16:30:18.335401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.752 [2024-10-14 16:30:18.335414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd17330 00:05:13.752 [2024-10-14 16:30:18.335420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.752 [2024-10-14 16:30:18.336495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.752 [2024-10-14 16:30:18.336515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.752 Passthru0 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.752 { 00:05:13.752 "name": "Malloc2", 00:05:13.752 "aliases": [ 00:05:13.752 "b4fe0758-296d-49ed-9f0a-033c46e75167" 00:05:13.752 ], 00:05:13.752 "product_name": "Malloc disk", 00:05:13.752 "block_size": 512, 00:05:13.752 "num_blocks": 16384, 00:05:13.752 "uuid": "b4fe0758-296d-49ed-9f0a-033c46e75167", 00:05:13.752 "assigned_rate_limits": { 00:05:13.752 "rw_ios_per_sec": 0, 00:05:13.752 "rw_mbytes_per_sec": 0, 00:05:13.752 "r_mbytes_per_sec": 0, 00:05:13.752 "w_mbytes_per_sec": 0 00:05:13.752 }, 00:05:13.752 "claimed": true, 00:05:13.752 "claim_type": "exclusive_write", 00:05:13.752 "zoned": false, 00:05:13.752 "supported_io_types": { 00:05:13.752 "read": true, 00:05:13.752 "write": true, 00:05:13.752 "unmap": true, 00:05:13.752 "flush": true, 00:05:13.752 "reset": true, 00:05:13.752 "nvme_admin": false, 00:05:13.752 "nvme_io": false, 00:05:13.752 "nvme_io_md": false, 00:05:13.752 "write_zeroes": true, 00:05:13.752 "zcopy": true, 00:05:13.752 "get_zone_info": false, 00:05:13.752 "zone_management": false, 00:05:13.752 "zone_append": false, 00:05:13.752 "compare": false, 00:05:13.752 "compare_and_write": false, 00:05:13.752 "abort": true, 00:05:13.752 "seek_hole": false, 00:05:13.752 "seek_data": false, 00:05:13.752 "copy": true, 00:05:13.752 "nvme_iov_md": false 00:05:13.752 }, 00:05:13.752 "memory_domains": [ 00:05:13.752 { 00:05:13.752 "dma_device_id": "system", 00:05:13.752 "dma_device_type": 1 00:05:13.752 }, 00:05:13.752 { 00:05:13.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.752 "dma_device_type": 2 00:05:13.752 } 00:05:13.752 ], 00:05:13.752 "driver_specific": {} 00:05:13.752 }, 00:05:13.752 { 00:05:13.752 "name": "Passthru0", 00:05:13.752 "aliases": [ 00:05:13.752 "36ae4d12-753b-5288-9ebb-f543170746c2" 00:05:13.752 ], 00:05:13.752 "product_name": "passthru", 00:05:13.752 "block_size": 512, 00:05:13.752 "num_blocks": 16384, 00:05:13.752 "uuid": "36ae4d12-753b-5288-9ebb-f543170746c2", 00:05:13.752 "assigned_rate_limits": { 00:05:13.752 "rw_ios_per_sec": 0, 00:05:13.752 "rw_mbytes_per_sec": 0, 00:05:13.752 "r_mbytes_per_sec": 0, 00:05:13.752 "w_mbytes_per_sec": 0 00:05:13.752 }, 00:05:13.752 "claimed": false, 00:05:13.752 "zoned": false, 00:05:13.752 "supported_io_types": { 00:05:13.752 "read": true, 00:05:13.752 "write": true, 00:05:13.752 "unmap": true, 00:05:13.752 "flush": true, 00:05:13.752 "reset": true, 
00:05:13.752 "nvme_admin": false, 00:05:13.752 "nvme_io": false, 00:05:13.752 "nvme_io_md": false, 00:05:13.752 "write_zeroes": true, 00:05:13.752 "zcopy": true, 00:05:13.752 "get_zone_info": false, 00:05:13.752 "zone_management": false, 00:05:13.752 "zone_append": false, 00:05:13.752 "compare": false, 00:05:13.752 "compare_and_write": false, 00:05:13.752 "abort": true, 00:05:13.752 "seek_hole": false, 00:05:13.752 "seek_data": false, 00:05:13.752 "copy": true, 00:05:13.752 "nvme_iov_md": false 00:05:13.752 }, 00:05:13.752 "memory_domains": [ 00:05:13.752 { 00:05:13.752 "dma_device_id": "system", 00:05:13.752 "dma_device_type": 1 00:05:13.752 }, 00:05:13.752 { 00:05:13.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.752 "dma_device_type": 2 00:05:13.752 } 00:05:13.752 ], 00:05:13.752 "driver_specific": { 00:05:13.752 "passthru": { 00:05:13.752 "name": "Passthru0", 00:05:13.752 "base_bdev_name": "Malloc2" 00:05:13.752 } 00:05:13.752 } 00:05:13.752 } 00:05:13.752 ]' 00:05:13.752 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.012 00:05:14.012 real 0m0.268s 00:05:14.012 user 0m0.162s 00:05:14.012 sys 0m0.043s 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.012 16:30:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.012 ************************************ 00:05:14.012 END TEST rpc_daemon_integrity 00:05:14.012 ************************************ 00:05:14.012 16:30:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:14.012 16:30:18 rpc -- rpc/rpc.sh@84 -- # killprocess 350289 00:05:14.012 16:30:18 rpc -- common/autotest_common.sh@950 -- # '[' -z 350289 ']' 00:05:14.012 16:30:18 rpc -- common/autotest_common.sh@954 -- # kill -0 350289 00:05:14.012 16:30:18 rpc -- common/autotest_common.sh@955 -- # uname 00:05:14.012 16:30:18 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.012 16:30:18 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 350289 
00:05:14.012 16:30:18 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.012 16:30:18 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.012 16:30:18 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 350289' 00:05:14.012 killing process with pid 350289 00:05:14.012 16:30:18 rpc -- common/autotest_common.sh@969 -- # kill 350289 00:05:14.012 16:30:18 rpc -- common/autotest_common.sh@974 -- # wait 350289 00:05:14.271 00:05:14.271 real 0m2.056s 00:05:14.271 user 0m2.611s 00:05:14.271 sys 0m0.693s 00:05:14.271 16:30:18 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.271 16:30:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.271 ************************************ 00:05:14.271 END TEST rpc 00:05:14.271 ************************************ 00:05:14.271 16:30:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:14.271 16:30:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.271 16:30:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.271 16:30:18 -- common/autotest_common.sh@10 -- # set +x 00:05:14.531 ************************************ 00:05:14.531 START TEST skip_rpc 00:05:14.531 ************************************ 00:05:14.531 16:30:18 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:14.531 * Looking for test storage... 00:05:14.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.531 16:30:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.531 --rc genhtml_branch_coverage=1 00:05:14.531 --rc genhtml_function_coverage=1 00:05:14.531 --rc genhtml_legend=1 00:05:14.531 --rc geninfo_all_blocks=1 00:05:14.531 --rc geninfo_unexecuted_blocks=1 00:05:14.531 00:05:14.531 ' 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.531 --rc genhtml_branch_coverage=1 00:05:14.531 --rc genhtml_function_coverage=1 00:05:14.531 --rc genhtml_legend=1 00:05:14.531 --rc geninfo_all_blocks=1 00:05:14.531 --rc geninfo_unexecuted_blocks=1 00:05:14.531 00:05:14.531 ' 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.531 --rc genhtml_branch_coverage=1 00:05:14.531 --rc genhtml_function_coverage=1 00:05:14.531 --rc genhtml_legend=1 00:05:14.531 --rc geninfo_all_blocks=1 00:05:14.531 --rc geninfo_unexecuted_blocks=1 00:05:14.531 00:05:14.531 ' 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.531 --rc genhtml_branch_coverage=1 00:05:14.531 --rc genhtml_function_coverage=1 00:05:14.531 --rc genhtml_legend=1 00:05:14.531 --rc geninfo_all_blocks=1 00:05:14.531 --rc geninfo_unexecuted_blocks=1 00:05:14.531 00:05:14.531 ' 00:05:14.531 16:30:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.531 16:30:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.531 16:30:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.531 16:30:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.531 ************************************ 00:05:14.531 START TEST skip_rpc 00:05:14.531 ************************************ 00:05:14.531 16:30:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:14.531 
16:30:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=350739 00:05:14.531 16:30:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.531 16:30:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:14.531 16:30:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:14.791 [2024-10-14 16:30:19.190311] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:05:14.791 [2024-10-14 16:30:19.190349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid350739 ] 00:05:14.791 [2024-10-14 16:30:19.257613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.791 [2024-10-14 16:30:19.298148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 350739 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 350739 ']' 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 350739 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 350739 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 350739' 00:05:20.059 killing process with pid 350739 00:05:20.059 16:30:24 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 350739 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 350739 00:05:20.059 00:05:20.059 real 0m5.361s 00:05:20.059 user 0m5.122s 00:05:20.059 sys 0m0.273s 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.059 16:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.059 ************************************ 00:05:20.059 END TEST skip_rpc 00:05:20.059 ************************************ 00:05:20.059 16:30:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:20.059 16:30:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.059 16:30:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.059 16:30:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.059 ************************************ 00:05:20.059 START TEST skip_rpc_with_json 00:05:20.059 ************************************ 00:05:20.059 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:20.059 16:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:20.059 16:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=351661 00:05:20.059 16:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.059 16:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.059 16:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 351661 00:05:20.059 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 351661 ']' 00:05:20.059 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.059 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.059 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.060 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.060 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.060 [2024-10-14 16:30:24.619035] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
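The skip_rpc_with_json case starting above exercises a save/relaunch round trip: the first target runs with a live RPC server, nvmf_get_transports initially fails because no TCP transport exists yet, one is then created over RPC, the whole configuration is dumped with save_config into test/rpc/config.json, and a second target is later relaunched from that file with --no-rpc-server; its log is finally grepped for the 'TCP Transport Init' notice. A rough sketch of the round trip, assuming plain sleeps in place of the test's waitforlisten helper:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  CONFIG="$SPDK/test/rpc/config.json"
  LOG="$SPDK/test/rpc/log.txt"

  "$SPDK/build/bin/spdk_tgt" -m 0x1 &                    # first target, RPC server enabled
  first=$!
  sleep 2
  $RPC nvmf_create_transport -t tcp                      # the state that must survive the round trip
  $RPC save_config > "$CONFIG"                           # dump the full JSON-RPC configuration
  kill "$first" && wait "$first"

  # relaunch purely from the saved JSON, with the RPC server disabled this time
  "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 &
  second=$!
  sleep 5
  kill "$second" && wait "$second"
  grep -q 'TCP Transport Init' "$LOG"                    # proves the transport was restored from JSON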
00:05:20.060 [2024-10-14 16:30:24.619078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351661 ] 00:05:20.060 [2024-10-14 16:30:24.687457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.319 [2024-10-14 16:30:24.730051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.319 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.319 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:20.319 16:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.319 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.319 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.319 [2024-10-14 16:30:24.944062] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.319 request: 00:05:20.319 { 00:05:20.319 "trtype": "tcp", 00:05:20.319 "method": "nvmf_get_transports", 00:05:20.319 "req_id": 1 00:05:20.319 } 00:05:20.319 Got JSON-RPC error response 00:05:20.319 response: 00:05:20.319 { 00:05:20.319 "code": -19, 00:05:20.319 "message": "No such device" 00:05:20.319 } 00:05:20.319 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:20.319 16:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.319 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.319 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.579 [2024-10-14 16:30:24.956175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.579 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.579 16:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.579 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.579 16:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.579 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.579 16:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.579 { 00:05:20.579 "subsystems": [ 00:05:20.579 { 00:05:20.579 "subsystem": "fsdev", 00:05:20.579 "config": [ 00:05:20.579 { 00:05:20.579 "method": "fsdev_set_opts", 00:05:20.579 "params": { 00:05:20.579 "fsdev_io_pool_size": 65535, 00:05:20.579 "fsdev_io_cache_size": 256 00:05:20.579 } 00:05:20.579 } 00:05:20.579 ] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "vfio_user_target", 00:05:20.579 "config": null 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "keyring", 00:05:20.579 "config": [] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "iobuf", 00:05:20.579 "config": [ 00:05:20.579 { 00:05:20.579 "method": "iobuf_set_options", 00:05:20.579 "params": { 00:05:20.579 "small_pool_count": 8192, 00:05:20.579 "large_pool_count": 1024, 00:05:20.579 "small_bufsize": 8192, 00:05:20.579 "large_bufsize": 135168 00:05:20.579 } 00:05:20.579 } 00:05:20.579 ] 00:05:20.579 }, 00:05:20.579 { 
00:05:20.579 "subsystem": "sock", 00:05:20.579 "config": [ 00:05:20.579 { 00:05:20.579 "method": "sock_set_default_impl", 00:05:20.579 "params": { 00:05:20.579 "impl_name": "posix" 00:05:20.579 } 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "method": "sock_impl_set_options", 00:05:20.579 "params": { 00:05:20.579 "impl_name": "ssl", 00:05:20.579 "recv_buf_size": 4096, 00:05:20.579 "send_buf_size": 4096, 00:05:20.579 "enable_recv_pipe": true, 00:05:20.579 "enable_quickack": false, 00:05:20.579 "enable_placement_id": 0, 00:05:20.579 "enable_zerocopy_send_server": true, 00:05:20.579 "enable_zerocopy_send_client": false, 00:05:20.579 "zerocopy_threshold": 0, 00:05:20.579 "tls_version": 0, 00:05:20.579 "enable_ktls": false 00:05:20.579 } 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "method": "sock_impl_set_options", 00:05:20.579 "params": { 00:05:20.579 "impl_name": "posix", 00:05:20.579 "recv_buf_size": 2097152, 00:05:20.579 "send_buf_size": 2097152, 00:05:20.579 "enable_recv_pipe": true, 00:05:20.579 "enable_quickack": false, 00:05:20.579 "enable_placement_id": 0, 00:05:20.579 "enable_zerocopy_send_server": true, 00:05:20.579 "enable_zerocopy_send_client": false, 00:05:20.579 "zerocopy_threshold": 0, 00:05:20.579 "tls_version": 0, 00:05:20.579 "enable_ktls": false 00:05:20.579 } 00:05:20.579 } 00:05:20.579 ] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "vmd", 00:05:20.579 "config": [] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "accel", 00:05:20.579 "config": [ 00:05:20.579 { 00:05:20.579 "method": "accel_set_options", 00:05:20.579 "params": { 00:05:20.579 "small_cache_size": 128, 00:05:20.579 "large_cache_size": 16, 00:05:20.579 "task_count": 2048, 00:05:20.579 "sequence_count": 2048, 00:05:20.579 "buf_count": 2048 00:05:20.579 } 00:05:20.579 } 00:05:20.579 ] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "bdev", 00:05:20.579 "config": [ 00:05:20.579 { 00:05:20.579 "method": "bdev_set_options", 00:05:20.579 "params": { 00:05:20.579 "bdev_io_pool_size": 65535, 00:05:20.579 "bdev_io_cache_size": 256, 00:05:20.579 "bdev_auto_examine": true, 00:05:20.579 "iobuf_small_cache_size": 128, 00:05:20.579 "iobuf_large_cache_size": 16 00:05:20.579 } 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "method": "bdev_raid_set_options", 00:05:20.579 "params": { 00:05:20.579 "process_window_size_kb": 1024, 00:05:20.579 "process_max_bandwidth_mb_sec": 0 00:05:20.579 } 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "method": "bdev_iscsi_set_options", 00:05:20.579 "params": { 00:05:20.579 "timeout_sec": 30 00:05:20.579 } 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "method": "bdev_nvme_set_options", 00:05:20.579 "params": { 00:05:20.579 "action_on_timeout": "none", 00:05:20.579 "timeout_us": 0, 00:05:20.579 "timeout_admin_us": 0, 00:05:20.579 "keep_alive_timeout_ms": 10000, 00:05:20.579 "arbitration_burst": 0, 00:05:20.579 "low_priority_weight": 0, 00:05:20.579 "medium_priority_weight": 0, 00:05:20.579 "high_priority_weight": 0, 00:05:20.579 "nvme_adminq_poll_period_us": 10000, 00:05:20.579 "nvme_ioq_poll_period_us": 0, 00:05:20.579 "io_queue_requests": 0, 00:05:20.579 "delay_cmd_submit": true, 00:05:20.579 "transport_retry_count": 4, 00:05:20.579 "bdev_retry_count": 3, 00:05:20.579 "transport_ack_timeout": 0, 00:05:20.579 "ctrlr_loss_timeout_sec": 0, 00:05:20.579 "reconnect_delay_sec": 0, 00:05:20.579 "fast_io_fail_timeout_sec": 0, 00:05:20.579 "disable_auto_failback": false, 00:05:20.579 "generate_uuids": false, 00:05:20.579 "transport_tos": 0, 00:05:20.579 "nvme_error_stat": false, 
00:05:20.579 "rdma_srq_size": 0, 00:05:20.579 "io_path_stat": false, 00:05:20.579 "allow_accel_sequence": false, 00:05:20.579 "rdma_max_cq_size": 0, 00:05:20.579 "rdma_cm_event_timeout_ms": 0, 00:05:20.579 "dhchap_digests": [ 00:05:20.579 "sha256", 00:05:20.579 "sha384", 00:05:20.579 "sha512" 00:05:20.579 ], 00:05:20.579 "dhchap_dhgroups": [ 00:05:20.579 "null", 00:05:20.579 "ffdhe2048", 00:05:20.579 "ffdhe3072", 00:05:20.579 "ffdhe4096", 00:05:20.579 "ffdhe6144", 00:05:20.579 "ffdhe8192" 00:05:20.579 ] 00:05:20.579 } 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "method": "bdev_nvme_set_hotplug", 00:05:20.579 "params": { 00:05:20.579 "period_us": 100000, 00:05:20.579 "enable": false 00:05:20.579 } 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "method": "bdev_wait_for_examine" 00:05:20.579 } 00:05:20.579 ] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "scsi", 00:05:20.579 "config": null 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "scheduler", 00:05:20.579 "config": [ 00:05:20.579 { 00:05:20.579 "method": "framework_set_scheduler", 00:05:20.579 "params": { 00:05:20.579 "name": "static" 00:05:20.579 } 00:05:20.579 } 00:05:20.579 ] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "vhost_scsi", 00:05:20.579 "config": [] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "vhost_blk", 00:05:20.579 "config": [] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "ublk", 00:05:20.579 "config": [] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "nbd", 00:05:20.579 "config": [] 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "subsystem": "nvmf", 00:05:20.579 "config": [ 00:05:20.579 { 00:05:20.579 "method": "nvmf_set_config", 00:05:20.579 "params": { 00:05:20.579 "discovery_filter": "match_any", 00:05:20.579 "admin_cmd_passthru": { 00:05:20.579 "identify_ctrlr": false 00:05:20.579 }, 00:05:20.579 "dhchap_digests": [ 00:05:20.579 "sha256", 00:05:20.579 "sha384", 00:05:20.579 "sha512" 00:05:20.579 ], 00:05:20.579 "dhchap_dhgroups": [ 00:05:20.579 "null", 00:05:20.579 "ffdhe2048", 00:05:20.579 "ffdhe3072", 00:05:20.579 "ffdhe4096", 00:05:20.579 "ffdhe6144", 00:05:20.579 "ffdhe8192" 00:05:20.579 ] 00:05:20.579 } 00:05:20.579 }, 00:05:20.579 { 00:05:20.579 "method": "nvmf_set_max_subsystems", 00:05:20.579 "params": { 00:05:20.579 "max_subsystems": 1024 00:05:20.579 } 00:05:20.579 }, 00:05:20.579 { 00:05:20.580 "method": "nvmf_set_crdt", 00:05:20.580 "params": { 00:05:20.580 "crdt1": 0, 00:05:20.580 "crdt2": 0, 00:05:20.580 "crdt3": 0 00:05:20.580 } 00:05:20.580 }, 00:05:20.580 { 00:05:20.580 "method": "nvmf_create_transport", 00:05:20.580 "params": { 00:05:20.580 "trtype": "TCP", 00:05:20.580 "max_queue_depth": 128, 00:05:20.580 "max_io_qpairs_per_ctrlr": 127, 00:05:20.580 "in_capsule_data_size": 4096, 00:05:20.580 "max_io_size": 131072, 00:05:20.580 "io_unit_size": 131072, 00:05:20.580 "max_aq_depth": 128, 00:05:20.580 "num_shared_buffers": 511, 00:05:20.580 "buf_cache_size": 4294967295, 00:05:20.580 "dif_insert_or_strip": false, 00:05:20.580 "zcopy": false, 00:05:20.580 "c2h_success": true, 00:05:20.580 "sock_priority": 0, 00:05:20.580 "abort_timeout_sec": 1, 00:05:20.580 "ack_timeout": 0, 00:05:20.580 "data_wr_pool_size": 0 00:05:20.580 } 00:05:20.580 } 00:05:20.580 ] 00:05:20.580 }, 00:05:20.580 { 00:05:20.580 "subsystem": "iscsi", 00:05:20.580 "config": [ 00:05:20.580 { 00:05:20.580 "method": "iscsi_set_options", 00:05:20.580 "params": { 00:05:20.580 "node_base": "iqn.2016-06.io.spdk", 00:05:20.580 "max_sessions": 128, 00:05:20.580 
"max_connections_per_session": 2, 00:05:20.580 "max_queue_depth": 64, 00:05:20.580 "default_time2wait": 2, 00:05:20.580 "default_time2retain": 20, 00:05:20.580 "first_burst_length": 8192, 00:05:20.580 "immediate_data": true, 00:05:20.580 "allow_duplicated_isid": false, 00:05:20.580 "error_recovery_level": 0, 00:05:20.580 "nop_timeout": 60, 00:05:20.580 "nop_in_interval": 30, 00:05:20.580 "disable_chap": false, 00:05:20.580 "require_chap": false, 00:05:20.580 "mutual_chap": false, 00:05:20.580 "chap_group": 0, 00:05:20.580 "max_large_datain_per_connection": 64, 00:05:20.580 "max_r2t_per_connection": 4, 00:05:20.580 "pdu_pool_size": 36864, 00:05:20.580 "immediate_data_pool_size": 16384, 00:05:20.580 "data_out_pool_size": 2048 00:05:20.580 } 00:05:20.580 } 00:05:20.580 ] 00:05:20.580 } 00:05:20.580 ] 00:05:20.580 } 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 351661 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 351661 ']' 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 351661 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 351661 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 351661' 00:05:20.580 killing process with pid 351661 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 351661 00:05:20.580 16:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 351661 00:05:21.147 16:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=351888 00:05:21.147 16:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:21.147 16:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 351888 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 351888 ']' 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 351888 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 351888 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 351888' 00:05:26.417 killing process with pid 351888 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 351888 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 351888 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.417 00:05:26.417 real 0m6.277s 00:05:26.417 user 0m5.959s 00:05:26.417 sys 0m0.613s 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.417 ************************************ 00:05:26.417 END TEST skip_rpc_with_json 00:05:26.417 ************************************ 00:05:26.417 16:30:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:26.417 16:30:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.417 16:30:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.417 16:30:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.417 ************************************ 00:05:26.417 START TEST skip_rpc_with_delay 00:05:26.417 ************************************ 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.417 [2024-10-14 16:30:30.969654] app.c: 
842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.417 00:05:26.417 real 0m0.068s 00:05:26.417 user 0m0.046s 00:05:26.417 sys 0m0.022s 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.417 16:30:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:26.417 ************************************ 00:05:26.417 END TEST skip_rpc_with_delay 00:05:26.417 ************************************ 00:05:26.417 16:30:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:26.417 16:30:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:26.417 16:30:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:26.417 16:30:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.417 16:30:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.417 16:30:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.677 ************************************ 00:05:26.677 START TEST exit_on_failed_rpc_init 00:05:26.677 ************************************ 00:05:26.677 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:26.677 16:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=352859 00:05:26.677 16:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 352859 00:05:26.677 16:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.677 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 352859 ']' 00:05:26.677 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.677 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.677 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.677 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.677 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.677 [2024-10-14 16:30:31.112536] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
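exit_on_failed_rpc_init, which begins here, verifies that a second spdk_tgt aborts cleanly when its RPC socket is already owned: the first instance (core mask 0x1) holds /var/tmp/spdk.sock, so the second instance (core mask 0x2) must fail RPC initialization and exit non-zero, exactly as the rpc.c errors below report. A condensed sketch of the collision, assuming both instances use the default socket path:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" -m 0x1 &          # first instance binds /var/tmp/spdk.sock
  first=$!
  sleep 2
  if "$SPDK/build/bin/spdk_tgt" -m 0x2; then   # same default socket -> rpc.c: "in use. Specify another."
      echo "unexpected: second target came up on a busy RPC socket" >&2
      exit 1
  fi
  kill "$first" && wait "$first"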
00:05:26.677 [2024-10-14 16:30:31.112583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352859 ] 00:05:26.677 [2024-10-14 16:30:31.181020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.677 [2024-10-14 16:30:31.221478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:26.936 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:26.936 [2024-10-14 16:30:31.509179] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:05:26.936 [2024-10-14 16:30:31.509227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352884 ] 00:05:27.196 [2024-10-14 16:30:31.576464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.196 [2024-10-14 16:30:31.617398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.196 [2024-10-14 16:30:31.617463] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:27.196 [2024-10-14 16:30:31.617472] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:27.196 [2024-10-14 16:30:31.617477] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 352859 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 352859 ']' 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 352859 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 352859 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 352859' 00:05:27.196 killing process with pid 352859 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 352859 00:05:27.196 16:30:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 352859 00:05:27.455 00:05:27.455 real 0m0.960s 00:05:27.455 user 0m1.010s 00:05:27.455 sys 0m0.390s 00:05:27.455 16:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.455 16:30:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.455 ************************************ 00:05:27.455 END TEST exit_on_failed_rpc_init 00:05:27.455 ************************************ 00:05:27.455 16:30:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:27.455 00:05:27.455 real 0m13.129s 00:05:27.455 user 0m12.350s 00:05:27.455 sys 0m1.579s 00:05:27.455 16:30:32 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.455 16:30:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.455 ************************************ 00:05:27.455 END TEST skip_rpc 00:05:27.455 ************************************ 00:05:27.455 16:30:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.455 16:30:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.455 16:30:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.455 16:30:32 -- 
common/autotest_common.sh@10 -- # set +x 00:05:27.715 ************************************ 00:05:27.715 START TEST rpc_client 00:05:27.715 ************************************ 00:05:27.715 16:30:32 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:27.715 * Looking for test storage... 00:05:27.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:27.715 16:30:32 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:27.715 16:30:32 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:27.715 16:30:32 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:27.715 16:30:32 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.715 16:30:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:27.715 16:30:32 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.715 16:30:32 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:27.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.715 --rc genhtml_branch_coverage=1 00:05:27.715 --rc genhtml_function_coverage=1 00:05:27.715 --rc genhtml_legend=1 00:05:27.715 --rc geninfo_all_blocks=1 00:05:27.715 --rc geninfo_unexecuted_blocks=1 00:05:27.715 00:05:27.715 ' 00:05:27.715 16:30:32 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:27.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.715 --rc genhtml_branch_coverage=1 00:05:27.715 --rc genhtml_function_coverage=1 00:05:27.715 --rc genhtml_legend=1 00:05:27.715 --rc geninfo_all_blocks=1 00:05:27.715 --rc geninfo_unexecuted_blocks=1 00:05:27.715 00:05:27.715 ' 00:05:27.715 16:30:32 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:27.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.715 --rc genhtml_branch_coverage=1 00:05:27.715 --rc genhtml_function_coverage=1 00:05:27.715 --rc genhtml_legend=1 00:05:27.715 --rc geninfo_all_blocks=1 00:05:27.716 --rc geninfo_unexecuted_blocks=1 00:05:27.716 00:05:27.716 ' 00:05:27.716 16:30:32 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:27.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.716 --rc genhtml_branch_coverage=1 00:05:27.716 --rc genhtml_function_coverage=1 00:05:27.716 --rc genhtml_legend=1 00:05:27.716 --rc geninfo_all_blocks=1 00:05:27.716 --rc geninfo_unexecuted_blocks=1 00:05:27.716 00:05:27.716 ' 00:05:27.716 16:30:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:27.716 OK 00:05:27.716 16:30:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.716 00:05:27.716 real 0m0.200s 00:05:27.716 user 0m0.124s 00:05:27.716 sys 0m0.090s 00:05:27.716 16:30:32 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.716 16:30:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:27.716 ************************************ 00:05:27.716 END TEST rpc_client 00:05:27.716 ************************************ 00:05:27.975 16:30:32 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
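The rpc_client suite above simply runs the compiled test/rpc_client/rpc_client_test binary and prints OK. The json_config suite launched next drives a dedicated target through scripts/rpc.py on /var/tmp/spdk_tgt.sock; every tgt_rpc line in the following trace expands to that call. A sketch of the wrapper plus representative calls seen later in this run (the function body is an assumption reconstructed from the expanded commands, not the literal helper from json_config/common.sh):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  tgt_rpc() {
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"
  }

  # representative calls replayed further down in this run:
  # tgt_rpc load_config          (fed the output of scripts/gen_nvme.sh --json-with-subsystems)
  # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
  # tgt_rpc save_config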
00:05:27.975 16:30:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.975 16:30:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.975 16:30:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.975 ************************************ 00:05:27.975 START TEST json_config 00:05:27.975 ************************************ 00:05:27.975 16:30:32 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:27.976 16:30:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.976 16:30:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.976 16:30:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.976 16:30:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.976 16:30:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.976 16:30:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.976 16:30:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.976 16:30:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.976 16:30:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.976 16:30:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.976 16:30:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.976 16:30:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:27.976 16:30:32 json_config -- scripts/common.sh@345 -- # : 1 00:05:27.976 16:30:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.976 16:30:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.976 16:30:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:27.976 16:30:32 json_config -- scripts/common.sh@353 -- # local d=1 00:05:27.976 16:30:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.976 16:30:32 json_config -- scripts/common.sh@355 -- # echo 1 00:05:27.976 16:30:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.976 16:30:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:27.976 16:30:32 json_config -- scripts/common.sh@353 -- # local d=2 00:05:27.976 16:30:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.976 16:30:32 json_config -- scripts/common.sh@355 -- # echo 2 00:05:27.976 16:30:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.976 16:30:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.976 16:30:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.976 16:30:32 json_config -- scripts/common.sh@368 -- # return 0 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:27.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.976 --rc genhtml_branch_coverage=1 00:05:27.976 --rc genhtml_function_coverage=1 00:05:27.976 --rc genhtml_legend=1 00:05:27.976 --rc geninfo_all_blocks=1 00:05:27.976 --rc geninfo_unexecuted_blocks=1 00:05:27.976 00:05:27.976 ' 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:27.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.976 --rc genhtml_branch_coverage=1 00:05:27.976 --rc genhtml_function_coverage=1 00:05:27.976 --rc genhtml_legend=1 00:05:27.976 --rc geninfo_all_blocks=1 00:05:27.976 --rc geninfo_unexecuted_blocks=1 00:05:27.976 00:05:27.976 ' 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:27.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.976 --rc genhtml_branch_coverage=1 00:05:27.976 --rc genhtml_function_coverage=1 00:05:27.976 --rc genhtml_legend=1 00:05:27.976 --rc geninfo_all_blocks=1 00:05:27.976 --rc geninfo_unexecuted_blocks=1 00:05:27.976 00:05:27.976 ' 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:27.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.976 --rc genhtml_branch_coverage=1 00:05:27.976 --rc genhtml_function_coverage=1 00:05:27.976 --rc genhtml_legend=1 00:05:27.976 --rc geninfo_all_blocks=1 00:05:27.976 --rc geninfo_unexecuted_blocks=1 00:05:27.976 00:05:27.976 ' 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:27.976 16:30:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.976 16:30:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.976 16:30:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.976 16:30:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.976 16:30:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.976 16:30:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.976 16:30:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.976 16:30:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.976 16:30:32 json_config -- paths/export.sh@5 -- # export PATH 00:05:27.976 16:30:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@51 -- # : 0 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:27.976 16:30:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.976 16:30:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:27.976 INFO: JSON configuration test init 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.976 16:30:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.976 16:30:32 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:27.976 16:30:32 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:27.976 16:30:32 json_config -- json_config/common.sh@10 -- # shift 00:05:27.977 16:30:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:27.977 16:30:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:27.977 16:30:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:27.977 16:30:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.977 16:30:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.977 16:30:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=353228 00:05:27.977 16:30:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:27.977 Waiting for target to run... 00:05:27.977 16:30:32 json_config -- json_config/common.sh@25 -- # waitforlisten 353228 /var/tmp/spdk_tgt.sock 00:05:27.977 16:30:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 353228 ']' 00:05:27.977 16:30:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:27.977 16:30:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.977 16:30:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.977 16:30:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:27.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.977 16:30:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.977 16:30:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.236 [2024-10-14 16:30:32.643503] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
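Here the json_config target is brought up with a dedicated RPC socket, a 1024 MB memory cap and --wait-for-rpc (-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock), and waitforlisten polls that socket before the first RPC is issued. A hedged sketch of such a readiness poll (the real waitforlisten in autotest_common.sh has more retries and error handling than this):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/spdk_tgt.sock
  for _ in $(seq 1 100); do
      if "$SPDK/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          break                                  # socket is up and the RPC server is answering
      fi
      sleep 0.1
  done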
00:05:28.236 [2024-10-14 16:30:32.643551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353228 ] 00:05:28.495 [2024-10-14 16:30:33.090801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.753 [2024-10-14 16:30:33.144823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.012 16:30:33 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.012 16:30:33 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:29.012 16:30:33 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.012 00:05:29.012 16:30:33 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:29.012 16:30:33 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:29.012 16:30:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.012 16:30:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.012 16:30:33 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:29.012 16:30:33 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:29.012 16:30:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.012 16:30:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.012 16:30:33 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:29.012 16:30:33 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:29.012 16:30:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:32.297 16:30:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.297 16:30:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:32.297 16:30:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:32.297 16:30:36 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@54 -- # sort 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:32.297 16:30:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:32.297 16:30:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:32.297 16:30:36 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:32.298 16:30:36 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:32.298 16:30:36 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:32.298 16:30:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.298 16:30:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.298 16:30:36 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:32.298 16:30:36 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:32.298 16:30:36 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:32.298 16:30:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.298 16:30:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.556 MallocForNvmf0 00:05:32.556 16:30:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.556 16:30:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.815 MallocForNvmf1 00:05:32.815 16:30:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.815 16:30:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.815 [2024-10-14 16:30:37.412767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.815 16:30:37 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.815 16:30:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.072 16:30:37 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.072 16:30:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.331 16:30:37 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.331 16:30:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.588 16:30:38 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.588 16:30:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.588 [2024-10-14 16:30:38.171090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.588 16:30:38 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:33.588 16:30:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.588 16:30:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.846 16:30:38 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:33.846 16:30:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.846 16:30:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.846 16:30:38 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:33.846 16:30:38 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.846 16:30:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.846 MallocBdevForConfigChangeCheck 00:05:33.846 16:30:38 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:33.846 16:30:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.846 16:30:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.105 16:30:38 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:34.105 16:30:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.363 16:30:38 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:34.363 INFO: shutting down applications... 
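Before the shutdown that the last line announces, the create_nvmf_subsystem_config step built the target's NVMe-oF configuration entirely through rpc.py against the target's UNIX socket. A minimal sketch of that same sequence, assuming a spdk_tgt is already listening on /var/tmp/spdk_tgt.sock (sizes, NQN and serial are the values the test itself uses):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # two malloc bdevs to serve as namespaces: size in MB, block size in bytes
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, one subsystem, both namespaces, and a listener on 127.0.0.1:4420
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420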
00:05:34.363 16:30:38 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:34.363 16:30:38 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:34.363 16:30:38 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:34.363 16:30:38 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:36.891 Calling clear_iscsi_subsystem 00:05:36.891 Calling clear_nvmf_subsystem 00:05:36.891 Calling clear_nbd_subsystem 00:05:36.891 Calling clear_ublk_subsystem 00:05:36.891 Calling clear_vhost_blk_subsystem 00:05:36.891 Calling clear_vhost_scsi_subsystem 00:05:36.891 Calling clear_bdev_subsystem 00:05:36.891 16:30:41 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:36.891 16:30:41 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:36.891 16:30:41 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:36.891 16:30:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.891 16:30:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:36.891 16:30:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:36.891 16:30:41 json_config -- json_config/json_config.sh@352 -- # break 00:05:36.891 16:30:41 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:36.891 16:30:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:36.891 16:30:41 json_config -- json_config/common.sh@31 -- # local app=target 00:05:36.891 16:30:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.891 16:30:41 json_config -- json_config/common.sh@35 -- # [[ -n 353228 ]] 00:05:36.891 16:30:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 353228 00:05:36.891 16:30:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.891 16:30:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.891 16:30:41 json_config -- json_config/common.sh@41 -- # kill -0 353228 00:05:36.891 16:30:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.459 16:30:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.459 16:30:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.459 16:30:41 json_config -- json_config/common.sh@41 -- # kill -0 353228 00:05:37.459 16:30:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.459 16:30:41 json_config -- json_config/common.sh@43 -- # break 00:05:37.459 16:30:41 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.459 16:30:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.459 SPDK target shutdown done 00:05:37.459 16:30:41 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:37.459 INFO: relaunching applications... 
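The clear/verify cycle above first asks the target to delete its configured objects (the "Calling clear_*_subsystem" lines), then checks that nothing but global parameters remains before relaunching. A condensed sketch of what json_config.sh does there, assuming the same socket path; the pipeline mirrors the xtrace lines above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # drop every configured subsystem object on the running target
    $SPDK/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config

    # confirm the remaining config is empty apart from global parameters
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method delete_global_parameters \
        | $SPDK/test/json_config/config_filter.py -method check_empty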
00:05:37.459 16:30:41 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.459 16:30:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:37.459 16:30:41 json_config -- json_config/common.sh@10 -- # shift 00:05:37.459 16:30:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.459 16:30:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.459 16:30:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.459 16:30:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.459 16:30:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.459 16:30:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=354964 00:05:37.459 16:30:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.459 Waiting for target to run... 00:05:37.459 16:30:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.459 16:30:41 json_config -- json_config/common.sh@25 -- # waitforlisten 354964 /var/tmp/spdk_tgt.sock 00:05:37.459 16:30:41 json_config -- common/autotest_common.sh@831 -- # '[' -z 354964 ']' 00:05:37.459 16:30:41 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.459 16:30:41 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.459 16:30:41 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.459 16:30:41 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.459 16:30:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.459 [2024-10-14 16:30:41.963142] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:05:37.459 [2024-10-14 16:30:41.963194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354964 ] 00:05:37.718 [2024-10-14 16:30:42.237735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.718 [2024-10-14 16:30:42.270927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.004 [2024-10-14 16:30:45.298504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.004 [2024-10-14 16:30:45.330845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.004 16:30:45 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.004 16:30:45 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:41.004 16:30:45 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.004 00:05:41.004 16:30:45 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:41.004 16:30:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:41.004 INFO: Checking if target configuration is the same... 
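The "same configuration" check that follows compares the relaunched target's live config against the JSON file it was started from. Roughly what test/json_config/json_diff.sh does, with both sides normalized by config_filter.py before diffing; the /tmp file names here are illustrative, not the mktemp names seen in the log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # dump the live config, normalize both sides, and diff them
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.sorted
    $SPDK/test/json_config/config_filter.py -method sort \
        < $SPDK/spdk_tgt_config.json > /tmp/file.sorted
    diff -u /tmp/file.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'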
00:05:41.004 16:30:45 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.004 16:30:45 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:41.004 16:30:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.004 + '[' 2 -ne 2 ']' 00:05:41.004 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.004 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:41.004 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.004 +++ basename /dev/fd/62 00:05:41.004 ++ mktemp /tmp/62.XXX 00:05:41.004 + tmp_file_1=/tmp/62.0gy 00:05:41.004 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.004 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.004 + tmp_file_2=/tmp/spdk_tgt_config.json.FS2 00:05:41.004 + ret=0 00:05:41.004 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.263 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.263 + diff -u /tmp/62.0gy /tmp/spdk_tgt_config.json.FS2 00:05:41.263 + echo 'INFO: JSON config files are the same' 00:05:41.263 INFO: JSON config files are the same 00:05:41.263 + rm /tmp/62.0gy /tmp/spdk_tgt_config.json.FS2 00:05:41.263 + exit 0 00:05:41.263 16:30:45 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:41.263 16:30:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:41.263 INFO: changing configuration and checking if this can be detected... 00:05:41.263 16:30:45 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.263 16:30:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:41.524 16:30:45 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.524 16:30:45 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:41.524 16:30:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.524 + '[' 2 -ne 2 ']' 00:05:41.524 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.524 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:41.524 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.524 +++ basename /dev/fd/62 00:05:41.524 ++ mktemp /tmp/62.XXX 00:05:41.524 + tmp_file_1=/tmp/62.bS6 00:05:41.524 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.524 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.524 + tmp_file_2=/tmp/spdk_tgt_config.json.kIU 00:05:41.524 + ret=0 00:05:41.524 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.782 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:41.782 + diff -u /tmp/62.bS6 /tmp/spdk_tgt_config.json.kIU 00:05:41.782 + ret=1 00:05:41.782 + echo '=== Start of file: /tmp/62.bS6 ===' 00:05:41.782 + cat /tmp/62.bS6 00:05:41.782 + echo '=== End of file: /tmp/62.bS6 ===' 00:05:41.782 + echo '' 00:05:41.783 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kIU ===' 00:05:41.783 + cat /tmp/spdk_tgt_config.json.kIU 00:05:41.783 + echo '=== End of file: /tmp/spdk_tgt_config.json.kIU ===' 00:05:41.783 + echo '' 00:05:41.783 + rm /tmp/62.bS6 /tmp/spdk_tgt_config.json.kIU 00:05:41.783 + exit 1 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:41.783 INFO: configuration change detected. 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:41.783 16:30:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.783 16:30:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@324 -- # [[ -n 354964 ]] 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:41.783 16:30:46 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.783 16:30:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:41.783 16:30:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:41.783 16:30:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.783 16:30:46 json_config -- json_config/json_config.sh@330 -- # killprocess 354964 00:05:41.783 16:30:46 json_config -- common/autotest_common.sh@950 -- # '[' -z 354964 ']' 00:05:41.783 16:30:46 json_config -- common/autotest_common.sh@954 -- # kill -0 354964 00:05:41.783 16:30:46 json_config -- common/autotest_common.sh@955 -- # uname 00:05:41.783 16:30:46 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.783 16:30:46 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 354964 00:05:42.041 16:30:46 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.041 16:30:46 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.041 16:30:46 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 354964' 00:05:42.041 killing process with pid 354964 00:05:42.041 16:30:46 json_config -- common/autotest_common.sh@969 -- # kill 354964 00:05:42.041 16:30:46 json_config -- common/autotest_common.sh@974 -- # wait 354964 00:05:43.953 16:30:48 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.953 16:30:48 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:43.953 16:30:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.953 16:30:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.953 16:30:48 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:43.953 16:30:48 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:43.953 INFO: Success 00:05:43.953 00:05:43.953 real 0m16.086s 00:05:43.953 user 0m16.556s 00:05:43.953 sys 0m2.498s 00:05:43.953 16:30:48 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.953 16:30:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.953 ************************************ 00:05:43.953 END TEST json_config 00:05:43.953 ************************************ 00:05:43.953 16:30:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:43.953 16:30:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.953 16:30:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.953 16:30:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.953 ************************************ 00:05:43.953 START TEST json_config_extra_key 00:05:43.953 ************************************ 00:05:43.953 16:30:48 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:44.213 16:30:48 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.213 16:30:48 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.213 16:30:48 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.213 16:30:48 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.213 16:30:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.213 16:30:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.213 16:30:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.213 16:30:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.214 16:30:48 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:44.214 16:30:48 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.214 16:30:48 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.214 --rc genhtml_branch_coverage=1 00:05:44.214 --rc genhtml_function_coverage=1 00:05:44.214 --rc genhtml_legend=1 00:05:44.214 --rc geninfo_all_blocks=1 00:05:44.214 --rc geninfo_unexecuted_blocks=1 00:05:44.214 00:05:44.214 ' 00:05:44.214 16:30:48 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.214 --rc genhtml_branch_coverage=1 00:05:44.214 --rc genhtml_function_coverage=1 00:05:44.214 --rc genhtml_legend=1 00:05:44.214 --rc geninfo_all_blocks=1 00:05:44.214 --rc geninfo_unexecuted_blocks=1 00:05:44.214 00:05:44.214 ' 00:05:44.214 16:30:48 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.214 --rc genhtml_branch_coverage=1 00:05:44.214 --rc genhtml_function_coverage=1 00:05:44.214 --rc genhtml_legend=1 00:05:44.214 --rc geninfo_all_blocks=1 00:05:44.214 --rc geninfo_unexecuted_blocks=1 00:05:44.214 00:05:44.214 ' 00:05:44.214 16:30:48 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.214 --rc genhtml_branch_coverage=1 00:05:44.214 --rc genhtml_function_coverage=1 00:05:44.214 --rc genhtml_legend=1 00:05:44.214 --rc geninfo_all_blocks=1 00:05:44.214 --rc geninfo_unexecuted_blocks=1 00:05:44.214 00:05:44.214 ' 00:05:44.214 16:30:48 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.214 16:30:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.214 16:30:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.214 16:30:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.214 16:30:48 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.214 16:30:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:44.214 16:30:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.214 16:30:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:44.214 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:44.215 INFO: launching applications... 
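The launch announced above comes down to starting spdk_tgt with the extra_key.json config passed via --json and waiting for its RPC socket to answer. A rough sketch, assuming the same core mask, memory size and socket path as the test; the real harness waits via waitforlisten, so polling rpc_get_methods here is only a simple stand-in:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json $SPDK/test/json_config/extra_key.json &
    app_pid=$!

    # stand-in for waitforlisten: retry an RPC until the socket is up
    $SPDK/scripts/rpc.py -r 100 -t 2 -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null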
00:05:44.215 16:30:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=356240 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.215 Waiting for target to run... 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 356240 /var/tmp/spdk_tgt.sock 00:05:44.215 16:30:48 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 356240 ']' 00:05:44.215 16:30:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:44.215 16:30:48 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.215 16:30:48 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.215 16:30:48 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.215 16:30:48 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.215 16:30:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.215 [2024-10-14 16:30:48.790442] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:05:44.215 [2024-10-14 16:30:48.790491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356240 ] 00:05:44.781 [2024-10-14 16:30:49.234915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.781 [2024-10-14 16:30:49.291041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.040 16:30:49 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.040 16:30:49 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:45.040 16:30:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:45.040 00:05:45.040 16:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:45.040 INFO: shutting down applications... 
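The shutdown that follows reuses json_config/common.sh's pattern: send SIGINT, then poll the pid for up to 30 half-second intervals before declaring the target gone. A condensed sketch, assuming app_pid holds the target's pid:

    kill -SIGINT "$app_pid"

    for i in $(seq 1 30); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done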
00:05:45.040 16:30:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:45.040 16:30:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:45.040 16:30:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:45.040 16:30:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 356240 ]] 00:05:45.040 16:30:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 356240 00:05:45.040 16:30:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:45.040 16:30:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.040 16:30:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 356240 00:05:45.040 16:30:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.609 16:30:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.609 16:30:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.609 16:30:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 356240 00:05:45.609 16:30:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:45.609 16:30:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:45.609 16:30:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:45.609 16:30:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:45.609 SPDK target shutdown done 00:05:45.609 16:30:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:45.609 Success 00:05:45.609 00:05:45.609 real 0m1.576s 00:05:45.609 user 0m1.203s 00:05:45.609 sys 0m0.557s 00:05:45.609 16:30:50 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.609 16:30:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.609 ************************************ 00:05:45.609 END TEST json_config_extra_key 00:05:45.609 ************************************ 00:05:45.609 16:30:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.609 16:30:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.609 16:30:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.609 16:30:50 -- common/autotest_common.sh@10 -- # set +x 00:05:45.609 ************************************ 00:05:45.609 START TEST alias_rpc 00:05:45.609 ************************************ 00:05:45.609 16:30:50 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.868 * Looking for test storage... 
00:05:45.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.868 16:30:50 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:45.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.868 --rc genhtml_branch_coverage=1 00:05:45.868 --rc genhtml_function_coverage=1 00:05:45.868 --rc genhtml_legend=1 00:05:45.868 --rc geninfo_all_blocks=1 00:05:45.868 --rc geninfo_unexecuted_blocks=1 00:05:45.868 00:05:45.868 ' 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:45.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.868 --rc genhtml_branch_coverage=1 00:05:45.868 --rc genhtml_function_coverage=1 00:05:45.868 --rc genhtml_legend=1 00:05:45.868 --rc geninfo_all_blocks=1 00:05:45.868 --rc geninfo_unexecuted_blocks=1 00:05:45.868 00:05:45.868 ' 00:05:45.868 16:30:50 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:45.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.868 --rc genhtml_branch_coverage=1 00:05:45.868 --rc genhtml_function_coverage=1 00:05:45.868 --rc genhtml_legend=1 00:05:45.868 --rc geninfo_all_blocks=1 00:05:45.868 --rc geninfo_unexecuted_blocks=1 00:05:45.868 00:05:45.868 ' 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:45.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.868 --rc genhtml_branch_coverage=1 00:05:45.868 --rc genhtml_function_coverage=1 00:05:45.868 --rc genhtml_legend=1 00:05:45.868 --rc geninfo_all_blocks=1 00:05:45.868 --rc geninfo_unexecuted_blocks=1 00:05:45.868 00:05:45.868 ' 00:05:45.868 16:30:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:45.868 16:30:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=356526 00:05:45.868 16:30:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.868 16:30:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 356526 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 356526 ']' 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.868 16:30:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.868 [2024-10-14 16:30:50.428102] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:05:45.868 [2024-10-14 16:30:50.428148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356526 ] 00:05:45.868 [2024-10-14 16:30:50.494948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.127 [2024-10-14 16:30:50.537939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.127 16:30:50 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.127 16:30:50 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.127 16:30:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:46.385 16:30:50 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 356526 00:05:46.385 16:30:50 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 356526 ']' 00:05:46.385 16:30:50 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 356526 00:05:46.385 16:30:50 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:46.385 16:30:50 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.385 16:30:50 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 356526 00:05:46.385 16:30:51 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.385 16:30:51 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.385 16:30:51 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 356526' 00:05:46.385 killing process with pid 356526 00:05:46.385 16:30:51 alias_rpc -- common/autotest_common.sh@969 -- # kill 356526 00:05:46.386 16:30:51 alias_rpc -- common/autotest_common.sh@974 -- # wait 356526 00:05:46.952 00:05:46.952 real 0m1.121s 00:05:46.952 user 0m1.136s 00:05:46.952 sys 0m0.410s 00:05:46.952 16:30:51 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.952 16:30:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.952 ************************************ 00:05:46.952 END TEST alias_rpc 00:05:46.952 ************************************ 00:05:46.952 16:30:51 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:46.952 16:30:51 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.952 16:30:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.952 16:30:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.952 16:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:46.952 ************************************ 00:05:46.952 START TEST spdkcli_tcp 00:05:46.952 ************************************ 00:05:46.952 16:30:51 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:46.952 * Looking for test storage... 
00:05:46.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:46.952 16:30:51 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.952 16:30:51 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.952 16:30:51 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.952 16:30:51 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.952 16:30:51 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:46.952 16:30:51 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.952 16:30:51 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.952 --rc genhtml_branch_coverage=1 00:05:46.952 --rc genhtml_function_coverage=1 00:05:46.952 --rc genhtml_legend=1 00:05:46.953 --rc geninfo_all_blocks=1 00:05:46.953 --rc geninfo_unexecuted_blocks=1 00:05:46.953 00:05:46.953 ' 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.953 --rc genhtml_branch_coverage=1 00:05:46.953 --rc genhtml_function_coverage=1 00:05:46.953 --rc genhtml_legend=1 00:05:46.953 --rc geninfo_all_blocks=1 00:05:46.953 --rc 
geninfo_unexecuted_blocks=1 00:05:46.953 00:05:46.953 ' 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.953 --rc genhtml_branch_coverage=1 00:05:46.953 --rc genhtml_function_coverage=1 00:05:46.953 --rc genhtml_legend=1 00:05:46.953 --rc geninfo_all_blocks=1 00:05:46.953 --rc geninfo_unexecuted_blocks=1 00:05:46.953 00:05:46.953 ' 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.953 --rc genhtml_branch_coverage=1 00:05:46.953 --rc genhtml_function_coverage=1 00:05:46.953 --rc genhtml_legend=1 00:05:46.953 --rc geninfo_all_blocks=1 00:05:46.953 --rc geninfo_unexecuted_blocks=1 00:05:46.953 00:05:46.953 ' 00:05:46.953 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:46.953 16:30:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:46.953 16:30:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:46.953 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:46.953 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:46.953 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:46.953 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.953 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=356815 00:05:46.953 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 356815 00:05:46.953 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 356815 ']' 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.953 16:30:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.211 [2024-10-14 16:30:51.618883] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
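Unlike the earlier tests, this spdkcli_tcp run drives rpc.py over TCP: as the log below shows, a socat process bridges 127.0.0.1:9998 to the target's UNIX socket and rpc.py is pointed at that address. A minimal sketch of the one-shot bridge and the rpc_get_methods call that produces the long method list that follows:

    # bridge TCP port 9998 to the target's UNIX-domain RPC socket (one-shot, as the test uses it)
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # issue an RPC through the bridge: retry up to 100 times, 2 s timeout per attempt
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null || true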
00:05:47.211 [2024-10-14 16:30:51.618936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356815 ] 00:05:47.211 [2024-10-14 16:30:51.685452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.211 [2024-10-14 16:30:51.728174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.211 [2024-10-14 16:30:51.728175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.470 16:30:51 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.470 16:30:51 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:47.470 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=356830 00:05:47.470 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:47.470 16:30:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:47.728 [ 00:05:47.728 "bdev_malloc_delete", 00:05:47.728 "bdev_malloc_create", 00:05:47.728 "bdev_null_resize", 00:05:47.728 "bdev_null_delete", 00:05:47.728 "bdev_null_create", 00:05:47.728 "bdev_nvme_cuse_unregister", 00:05:47.728 "bdev_nvme_cuse_register", 00:05:47.728 "bdev_opal_new_user", 00:05:47.728 "bdev_opal_set_lock_state", 00:05:47.728 "bdev_opal_delete", 00:05:47.728 "bdev_opal_get_info", 00:05:47.728 "bdev_opal_create", 00:05:47.728 "bdev_nvme_opal_revert", 00:05:47.728 "bdev_nvme_opal_init", 00:05:47.728 "bdev_nvme_send_cmd", 00:05:47.728 "bdev_nvme_set_keys", 00:05:47.728 "bdev_nvme_get_path_iostat", 00:05:47.728 "bdev_nvme_get_mdns_discovery_info", 00:05:47.728 "bdev_nvme_stop_mdns_discovery", 00:05:47.728 "bdev_nvme_start_mdns_discovery", 00:05:47.728 "bdev_nvme_set_multipath_policy", 00:05:47.728 "bdev_nvme_set_preferred_path", 00:05:47.728 "bdev_nvme_get_io_paths", 00:05:47.728 "bdev_nvme_remove_error_injection", 00:05:47.728 "bdev_nvme_add_error_injection", 00:05:47.729 "bdev_nvme_get_discovery_info", 00:05:47.729 "bdev_nvme_stop_discovery", 00:05:47.729 "bdev_nvme_start_discovery", 00:05:47.729 "bdev_nvme_get_controller_health_info", 00:05:47.729 "bdev_nvme_disable_controller", 00:05:47.729 "bdev_nvme_enable_controller", 00:05:47.729 "bdev_nvme_reset_controller", 00:05:47.729 "bdev_nvme_get_transport_statistics", 00:05:47.729 "bdev_nvme_apply_firmware", 00:05:47.729 "bdev_nvme_detach_controller", 00:05:47.729 "bdev_nvme_get_controllers", 00:05:47.729 "bdev_nvme_attach_controller", 00:05:47.729 "bdev_nvme_set_hotplug", 00:05:47.729 "bdev_nvme_set_options", 00:05:47.729 "bdev_passthru_delete", 00:05:47.729 "bdev_passthru_create", 00:05:47.729 "bdev_lvol_set_parent_bdev", 00:05:47.729 "bdev_lvol_set_parent", 00:05:47.729 "bdev_lvol_check_shallow_copy", 00:05:47.729 "bdev_lvol_start_shallow_copy", 00:05:47.729 "bdev_lvol_grow_lvstore", 00:05:47.729 "bdev_lvol_get_lvols", 00:05:47.729 "bdev_lvol_get_lvstores", 00:05:47.729 "bdev_lvol_delete", 00:05:47.729 "bdev_lvol_set_read_only", 00:05:47.729 "bdev_lvol_resize", 00:05:47.729 "bdev_lvol_decouple_parent", 00:05:47.729 "bdev_lvol_inflate", 00:05:47.729 "bdev_lvol_rename", 00:05:47.729 "bdev_lvol_clone_bdev", 00:05:47.729 "bdev_lvol_clone", 00:05:47.729 "bdev_lvol_snapshot", 00:05:47.729 "bdev_lvol_create", 00:05:47.729 "bdev_lvol_delete_lvstore", 00:05:47.729 "bdev_lvol_rename_lvstore", 
00:05:47.729 "bdev_lvol_create_lvstore", 00:05:47.729 "bdev_raid_set_options", 00:05:47.729 "bdev_raid_remove_base_bdev", 00:05:47.729 "bdev_raid_add_base_bdev", 00:05:47.729 "bdev_raid_delete", 00:05:47.729 "bdev_raid_create", 00:05:47.729 "bdev_raid_get_bdevs", 00:05:47.729 "bdev_error_inject_error", 00:05:47.729 "bdev_error_delete", 00:05:47.729 "bdev_error_create", 00:05:47.729 "bdev_split_delete", 00:05:47.729 "bdev_split_create", 00:05:47.729 "bdev_delay_delete", 00:05:47.729 "bdev_delay_create", 00:05:47.729 "bdev_delay_update_latency", 00:05:47.729 "bdev_zone_block_delete", 00:05:47.729 "bdev_zone_block_create", 00:05:47.729 "blobfs_create", 00:05:47.729 "blobfs_detect", 00:05:47.729 "blobfs_set_cache_size", 00:05:47.729 "bdev_aio_delete", 00:05:47.729 "bdev_aio_rescan", 00:05:47.729 "bdev_aio_create", 00:05:47.729 "bdev_ftl_set_property", 00:05:47.729 "bdev_ftl_get_properties", 00:05:47.729 "bdev_ftl_get_stats", 00:05:47.729 "bdev_ftl_unmap", 00:05:47.729 "bdev_ftl_unload", 00:05:47.729 "bdev_ftl_delete", 00:05:47.729 "bdev_ftl_load", 00:05:47.729 "bdev_ftl_create", 00:05:47.729 "bdev_virtio_attach_controller", 00:05:47.729 "bdev_virtio_scsi_get_devices", 00:05:47.729 "bdev_virtio_detach_controller", 00:05:47.729 "bdev_virtio_blk_set_hotplug", 00:05:47.729 "bdev_iscsi_delete", 00:05:47.729 "bdev_iscsi_create", 00:05:47.729 "bdev_iscsi_set_options", 00:05:47.729 "accel_error_inject_error", 00:05:47.729 "ioat_scan_accel_module", 00:05:47.729 "dsa_scan_accel_module", 00:05:47.729 "iaa_scan_accel_module", 00:05:47.729 "vfu_virtio_create_fs_endpoint", 00:05:47.729 "vfu_virtio_create_scsi_endpoint", 00:05:47.729 "vfu_virtio_scsi_remove_target", 00:05:47.729 "vfu_virtio_scsi_add_target", 00:05:47.729 "vfu_virtio_create_blk_endpoint", 00:05:47.729 "vfu_virtio_delete_endpoint", 00:05:47.729 "keyring_file_remove_key", 00:05:47.729 "keyring_file_add_key", 00:05:47.729 "keyring_linux_set_options", 00:05:47.729 "fsdev_aio_delete", 00:05:47.729 "fsdev_aio_create", 00:05:47.729 "iscsi_get_histogram", 00:05:47.729 "iscsi_enable_histogram", 00:05:47.729 "iscsi_set_options", 00:05:47.729 "iscsi_get_auth_groups", 00:05:47.729 "iscsi_auth_group_remove_secret", 00:05:47.729 "iscsi_auth_group_add_secret", 00:05:47.729 "iscsi_delete_auth_group", 00:05:47.729 "iscsi_create_auth_group", 00:05:47.729 "iscsi_set_discovery_auth", 00:05:47.729 "iscsi_get_options", 00:05:47.729 "iscsi_target_node_request_logout", 00:05:47.729 "iscsi_target_node_set_redirect", 00:05:47.729 "iscsi_target_node_set_auth", 00:05:47.729 "iscsi_target_node_add_lun", 00:05:47.729 "iscsi_get_stats", 00:05:47.729 "iscsi_get_connections", 00:05:47.729 "iscsi_portal_group_set_auth", 00:05:47.729 "iscsi_start_portal_group", 00:05:47.729 "iscsi_delete_portal_group", 00:05:47.729 "iscsi_create_portal_group", 00:05:47.729 "iscsi_get_portal_groups", 00:05:47.729 "iscsi_delete_target_node", 00:05:47.729 "iscsi_target_node_remove_pg_ig_maps", 00:05:47.729 "iscsi_target_node_add_pg_ig_maps", 00:05:47.729 "iscsi_create_target_node", 00:05:47.729 "iscsi_get_target_nodes", 00:05:47.729 "iscsi_delete_initiator_group", 00:05:47.729 "iscsi_initiator_group_remove_initiators", 00:05:47.729 "iscsi_initiator_group_add_initiators", 00:05:47.729 "iscsi_create_initiator_group", 00:05:47.729 "iscsi_get_initiator_groups", 00:05:47.729 "nvmf_set_crdt", 00:05:47.729 "nvmf_set_config", 00:05:47.729 "nvmf_set_max_subsystems", 00:05:47.729 "nvmf_stop_mdns_prr", 00:05:47.729 "nvmf_publish_mdns_prr", 00:05:47.729 "nvmf_subsystem_get_listeners", 00:05:47.729 
"nvmf_subsystem_get_qpairs", 00:05:47.729 "nvmf_subsystem_get_controllers", 00:05:47.729 "nvmf_get_stats", 00:05:47.729 "nvmf_get_transports", 00:05:47.729 "nvmf_create_transport", 00:05:47.729 "nvmf_get_targets", 00:05:47.729 "nvmf_delete_target", 00:05:47.729 "nvmf_create_target", 00:05:47.729 "nvmf_subsystem_allow_any_host", 00:05:47.729 "nvmf_subsystem_set_keys", 00:05:47.729 "nvmf_subsystem_remove_host", 00:05:47.729 "nvmf_subsystem_add_host", 00:05:47.729 "nvmf_ns_remove_host", 00:05:47.729 "nvmf_ns_add_host", 00:05:47.729 "nvmf_subsystem_remove_ns", 00:05:47.729 "nvmf_subsystem_set_ns_ana_group", 00:05:47.729 "nvmf_subsystem_add_ns", 00:05:47.729 "nvmf_subsystem_listener_set_ana_state", 00:05:47.729 "nvmf_discovery_get_referrals", 00:05:47.729 "nvmf_discovery_remove_referral", 00:05:47.729 "nvmf_discovery_add_referral", 00:05:47.729 "nvmf_subsystem_remove_listener", 00:05:47.729 "nvmf_subsystem_add_listener", 00:05:47.729 "nvmf_delete_subsystem", 00:05:47.729 "nvmf_create_subsystem", 00:05:47.729 "nvmf_get_subsystems", 00:05:47.729 "env_dpdk_get_mem_stats", 00:05:47.729 "nbd_get_disks", 00:05:47.729 "nbd_stop_disk", 00:05:47.729 "nbd_start_disk", 00:05:47.729 "ublk_recover_disk", 00:05:47.729 "ublk_get_disks", 00:05:47.729 "ublk_stop_disk", 00:05:47.729 "ublk_start_disk", 00:05:47.729 "ublk_destroy_target", 00:05:47.729 "ublk_create_target", 00:05:47.729 "virtio_blk_create_transport", 00:05:47.729 "virtio_blk_get_transports", 00:05:47.729 "vhost_controller_set_coalescing", 00:05:47.729 "vhost_get_controllers", 00:05:47.729 "vhost_delete_controller", 00:05:47.729 "vhost_create_blk_controller", 00:05:47.729 "vhost_scsi_controller_remove_target", 00:05:47.729 "vhost_scsi_controller_add_target", 00:05:47.729 "vhost_start_scsi_controller", 00:05:47.729 "vhost_create_scsi_controller", 00:05:47.729 "thread_set_cpumask", 00:05:47.729 "scheduler_set_options", 00:05:47.729 "framework_get_governor", 00:05:47.729 "framework_get_scheduler", 00:05:47.729 "framework_set_scheduler", 00:05:47.729 "framework_get_reactors", 00:05:47.729 "thread_get_io_channels", 00:05:47.729 "thread_get_pollers", 00:05:47.729 "thread_get_stats", 00:05:47.729 "framework_monitor_context_switch", 00:05:47.729 "spdk_kill_instance", 00:05:47.729 "log_enable_timestamps", 00:05:47.729 "log_get_flags", 00:05:47.729 "log_clear_flag", 00:05:47.729 "log_set_flag", 00:05:47.729 "log_get_level", 00:05:47.729 "log_set_level", 00:05:47.729 "log_get_print_level", 00:05:47.729 "log_set_print_level", 00:05:47.729 "framework_enable_cpumask_locks", 00:05:47.729 "framework_disable_cpumask_locks", 00:05:47.729 "framework_wait_init", 00:05:47.729 "framework_start_init", 00:05:47.729 "scsi_get_devices", 00:05:47.729 "bdev_get_histogram", 00:05:47.729 "bdev_enable_histogram", 00:05:47.729 "bdev_set_qos_limit", 00:05:47.729 "bdev_set_qd_sampling_period", 00:05:47.729 "bdev_get_bdevs", 00:05:47.729 "bdev_reset_iostat", 00:05:47.729 "bdev_get_iostat", 00:05:47.729 "bdev_examine", 00:05:47.729 "bdev_wait_for_examine", 00:05:47.729 "bdev_set_options", 00:05:47.729 "accel_get_stats", 00:05:47.729 "accel_set_options", 00:05:47.729 "accel_set_driver", 00:05:47.729 "accel_crypto_key_destroy", 00:05:47.729 "accel_crypto_keys_get", 00:05:47.729 "accel_crypto_key_create", 00:05:47.729 "accel_assign_opc", 00:05:47.729 "accel_get_module_info", 00:05:47.729 "accel_get_opc_assignments", 00:05:47.729 "vmd_rescan", 00:05:47.729 "vmd_remove_device", 00:05:47.729 "vmd_enable", 00:05:47.729 "sock_get_default_impl", 00:05:47.729 "sock_set_default_impl", 
00:05:47.729 "sock_impl_set_options", 00:05:47.729 "sock_impl_get_options", 00:05:47.729 "iobuf_get_stats", 00:05:47.729 "iobuf_set_options", 00:05:47.729 "keyring_get_keys", 00:05:47.729 "vfu_tgt_set_base_path", 00:05:47.729 "framework_get_pci_devices", 00:05:47.729 "framework_get_config", 00:05:47.729 "framework_get_subsystems", 00:05:47.729 "fsdev_set_opts", 00:05:47.729 "fsdev_get_opts", 00:05:47.729 "trace_get_info", 00:05:47.729 "trace_get_tpoint_group_mask", 00:05:47.729 "trace_disable_tpoint_group", 00:05:47.729 "trace_enable_tpoint_group", 00:05:47.729 "trace_clear_tpoint_mask", 00:05:47.729 "trace_set_tpoint_mask", 00:05:47.729 "notify_get_notifications", 00:05:47.729 "notify_get_types", 00:05:47.729 "spdk_get_version", 00:05:47.729 "rpc_get_methods" 00:05:47.729 ] 00:05:47.729 16:30:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:47.729 16:30:52 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.729 16:30:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.729 16:30:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:47.729 16:30:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 356815 00:05:47.729 16:30:52 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 356815 ']' 00:05:47.730 16:30:52 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 356815 00:05:47.730 16:30:52 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:47.730 16:30:52 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.730 16:30:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 356815 00:05:47.730 16:30:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.730 16:30:52 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.730 16:30:52 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 356815' 00:05:47.730 killing process with pid 356815 00:05:47.730 16:30:52 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 356815 00:05:47.730 16:30:52 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 356815 00:05:47.989 00:05:47.989 real 0m1.131s 00:05:47.989 user 0m1.910s 00:05:47.989 sys 0m0.437s 00:05:47.989 16:30:52 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.989 16:30:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.989 ************************************ 00:05:47.989 END TEST spdkcli_tcp 00:05:47.989 ************************************ 00:05:47.989 16:30:52 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.989 16:30:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.989 16:30:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.989 16:30:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.989 ************************************ 00:05:47.989 START TEST dpdk_mem_utility 00:05:47.989 ************************************ 00:05:47.989 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:48.250 * Looking for test storage... 
00:05:48.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.250 16:30:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.250 --rc genhtml_branch_coverage=1 00:05:48.250 --rc genhtml_function_coverage=1 00:05:48.250 --rc genhtml_legend=1 00:05:48.250 --rc geninfo_all_blocks=1 00:05:48.250 --rc geninfo_unexecuted_blocks=1 00:05:48.250 00:05:48.250 ' 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.250 --rc 
genhtml_branch_coverage=1 00:05:48.250 --rc genhtml_function_coverage=1 00:05:48.250 --rc genhtml_legend=1 00:05:48.250 --rc geninfo_all_blocks=1 00:05:48.250 --rc geninfo_unexecuted_blocks=1 00:05:48.250 00:05:48.250 ' 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.250 --rc genhtml_branch_coverage=1 00:05:48.250 --rc genhtml_function_coverage=1 00:05:48.250 --rc genhtml_legend=1 00:05:48.250 --rc geninfo_all_blocks=1 00:05:48.250 --rc geninfo_unexecuted_blocks=1 00:05:48.250 00:05:48.250 ' 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:48.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.250 --rc genhtml_branch_coverage=1 00:05:48.250 --rc genhtml_function_coverage=1 00:05:48.250 --rc genhtml_legend=1 00:05:48.250 --rc geninfo_all_blocks=1 00:05:48.250 --rc geninfo_unexecuted_blocks=1 00:05:48.250 00:05:48.250 ' 00:05:48.250 16:30:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.250 16:30:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=357123 00:05:48.250 16:30:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 357123 00:05:48.250 16:30:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 357123 ']' 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.250 16:30:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.250 [2024-10-14 16:30:52.816438] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
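For orientation, the dpdk_mem_utility run that follows reduces to a short shell sequence; a minimal sketch, assuming an SPDK build tree as the working directory (the absolute workspace path and the --file-prefix shown in the trace are specific to this run):

  # start a target and poll its RPC socket until it answers
  ./build/bin/spdk_tgt &
  ./scripts/rpc.py -t 1 rpc_get_methods
  # ask the target to write out its DPDK heap dump (the RPC replies with the dump filename)
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # summarize the dump; -m 0 adds the per-element listing for heap 0 seen below
  ./scripts/dpdk_mem_info.py
  ./scripts/dpdk_mem_info.py -m 0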
00:05:48.250 [2024-10-14 16:30:52.816484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357123 ] 00:05:48.250 [2024-10-14 16:30:52.885414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.509 [2024-10-14 16:30:52.927263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.509 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.509 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:48.509 16:30:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:48.509 16:30:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:48.509 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.509 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.509 { 00:05:48.509 "filename": "/tmp/spdk_mem_dump.txt" 00:05:48.509 } 00:05:48.509 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.509 16:30:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:48.768 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:48.768 1 heaps totaling size 810.000000 MiB 00:05:48.768 size: 810.000000 MiB heap id: 0 00:05:48.768 end heaps---------- 00:05:48.768 9 mempools totaling size 595.772034 MiB 00:05:48.768 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:48.768 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:48.768 size: 92.545471 MiB name: bdev_io_357123 00:05:48.768 size: 50.003479 MiB name: msgpool_357123 00:05:48.768 size: 36.509338 MiB name: fsdev_io_357123 00:05:48.768 size: 21.763794 MiB name: PDU_Pool 00:05:48.768 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:48.768 size: 4.133484 MiB name: evtpool_357123 00:05:48.768 size: 0.026123 MiB name: Session_Pool 00:05:48.768 end mempools------- 00:05:48.768 6 memzones totaling size 4.142822 MiB 00:05:48.768 size: 1.000366 MiB name: RG_ring_0_357123 00:05:48.768 size: 1.000366 MiB name: RG_ring_1_357123 00:05:48.768 size: 1.000366 MiB name: RG_ring_4_357123 00:05:48.768 size: 1.000366 MiB name: RG_ring_5_357123 00:05:48.768 size: 0.125366 MiB name: RG_ring_2_357123 00:05:48.768 size: 0.015991 MiB name: RG_ring_3_357123 00:05:48.768 end memzones------- 00:05:48.768 16:30:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:48.768 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:48.768 list of free elements. 
size: 10.862488 MiB 00:05:48.768 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:48.768 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:48.768 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:48.768 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:48.768 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:48.768 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:48.768 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:48.768 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:48.768 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:48.768 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:48.768 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:48.768 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:48.768 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:48.768 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:48.768 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:48.768 list of standard malloc elements. size: 199.218628 MiB 00:05:48.768 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:48.768 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:48.768 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:48.769 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:48.769 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:48.769 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:48.769 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:48.769 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:48.769 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:48.769 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:48.769 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:48.769 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:48.769 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:48.769 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:48.769 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:48.769 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:48.769 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:48.769 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:48.769 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:48.769 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:48.769 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:48.769 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:48.769 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:48.769 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:48.769 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:48.769 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:48.769 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:48.769 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:48.769 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:48.769 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:48.769 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:48.769 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:48.769 list of memzone associated elements. size: 599.918884 MiB 00:05:48.769 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:48.769 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:48.769 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:48.769 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:48.769 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:48.769 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_357123_0 00:05:48.769 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:48.769 associated memzone info: size: 48.002930 MiB name: MP_msgpool_357123_0 00:05:48.769 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:48.769 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_357123_0 00:05:48.769 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:48.769 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:48.769 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:48.769 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:48.769 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:48.769 associated memzone info: size: 3.000122 MiB name: MP_evtpool_357123_0 00:05:48.769 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:48.769 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_357123 00:05:48.769 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:48.769 associated memzone info: size: 1.007996 MiB name: MP_evtpool_357123 00:05:48.769 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:48.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:48.769 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:48.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:48.769 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:48.769 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:48.769 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:48.769 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:48.769 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:48.769 associated memzone info: size: 1.000366 MiB name: RG_ring_0_357123 00:05:48.769 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:48.769 associated memzone info: size: 1.000366 MiB name: RG_ring_1_357123 00:05:48.769 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:48.769 associated memzone info: size: 1.000366 MiB name: RG_ring_4_357123 00:05:48.769 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:48.769 associated memzone info: size: 1.000366 MiB name: RG_ring_5_357123 00:05:48.769 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:48.769 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_357123 00:05:48.769 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:48.769 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_357123 00:05:48.769 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:48.769 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:48.769 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:48.769 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:48.769 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:48.769 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:48.769 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:48.769 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_357123 00:05:48.769 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:48.769 associated memzone info: size: 0.125366 MiB name: RG_ring_2_357123 00:05:48.769 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:48.769 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:48.769 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:48.769 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:48.769 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:48.769 associated memzone info: size: 0.015991 MiB name: RG_ring_3_357123 00:05:48.769 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:48.769 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:48.769 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:48.769 associated memzone info: size: 0.000183 MiB name: MP_msgpool_357123 00:05:48.769 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:48.769 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_357123 00:05:48.769 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:48.769 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_357123 00:05:48.769 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:48.769 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:48.769 16:30:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:48.769 16:30:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 357123 00:05:48.769 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 357123 ']' 00:05:48.769 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 357123 00:05:48.769 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:48.769 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.769 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 357123 00:05:48.769 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.769 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.769 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 357123' 00:05:48.769 killing process with pid 357123 00:05:48.769 16:30:53 dpdk_mem_utility -- 
common/autotest_common.sh@969 -- # kill 357123 00:05:48.769 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 357123 00:05:49.028 00:05:49.028 real 0m0.999s 00:05:49.028 user 0m0.943s 00:05:49.028 sys 0m0.399s 00:05:49.028 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.028 16:30:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.028 ************************************ 00:05:49.028 END TEST dpdk_mem_utility 00:05:49.028 ************************************ 00:05:49.028 16:30:53 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.028 16:30:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.028 16:30:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.028 16:30:53 -- common/autotest_common.sh@10 -- # set +x 00:05:49.028 ************************************ 00:05:49.028 START TEST event 00:05:49.028 ************************************ 00:05:49.028 16:30:53 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:49.287 * Looking for test storage... 00:05:49.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:49.287 16:30:53 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:49.287 16:30:53 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:49.287 16:30:53 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:49.287 16:30:53 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:49.287 16:30:53 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.287 16:30:53 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.287 16:30:53 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.287 16:30:53 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.287 16:30:53 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.287 16:30:53 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.287 16:30:53 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.287 16:30:53 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.287 16:30:53 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.287 16:30:53 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.287 16:30:53 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.287 16:30:53 event -- scripts/common.sh@344 -- # case "$op" in 00:05:49.287 16:30:53 event -- scripts/common.sh@345 -- # : 1 00:05:49.287 16:30:53 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.287 16:30:53 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.287 16:30:53 event -- scripts/common.sh@365 -- # decimal 1 00:05:49.287 16:30:53 event -- scripts/common.sh@353 -- # local d=1 00:05:49.287 16:30:53 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.287 16:30:53 event -- scripts/common.sh@355 -- # echo 1 00:05:49.287 16:30:53 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.287 16:30:53 event -- scripts/common.sh@366 -- # decimal 2 00:05:49.287 16:30:53 event -- scripts/common.sh@353 -- # local d=2 00:05:49.287 16:30:53 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.287 16:30:53 event -- scripts/common.sh@355 -- # echo 2 00:05:49.287 16:30:53 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.287 16:30:53 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.287 16:30:53 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.287 16:30:53 event -- scripts/common.sh@368 -- # return 0 00:05:49.287 16:30:53 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.287 16:30:53 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:49.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.287 --rc genhtml_branch_coverage=1 00:05:49.287 --rc genhtml_function_coverage=1 00:05:49.287 --rc genhtml_legend=1 00:05:49.287 --rc geninfo_all_blocks=1 00:05:49.287 --rc geninfo_unexecuted_blocks=1 00:05:49.287 00:05:49.287 ' 00:05:49.287 16:30:53 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:49.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.287 --rc genhtml_branch_coverage=1 00:05:49.287 --rc genhtml_function_coverage=1 00:05:49.287 --rc genhtml_legend=1 00:05:49.287 --rc geninfo_all_blocks=1 00:05:49.287 --rc geninfo_unexecuted_blocks=1 00:05:49.287 00:05:49.287 ' 00:05:49.287 16:30:53 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:49.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.287 --rc genhtml_branch_coverage=1 00:05:49.287 --rc genhtml_function_coverage=1 00:05:49.287 --rc genhtml_legend=1 00:05:49.287 --rc geninfo_all_blocks=1 00:05:49.287 --rc geninfo_unexecuted_blocks=1 00:05:49.287 00:05:49.287 ' 00:05:49.287 16:30:53 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:49.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.287 --rc genhtml_branch_coverage=1 00:05:49.287 --rc genhtml_function_coverage=1 00:05:49.287 --rc genhtml_legend=1 00:05:49.287 --rc geninfo_all_blocks=1 00:05:49.287 --rc geninfo_unexecuted_blocks=1 00:05:49.287 00:05:49.287 ' 00:05:49.287 16:30:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:49.287 16:30:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:49.287 16:30:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:49.288 16:30:53 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:49.288 16:30:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.288 16:30:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.288 ************************************ 00:05:49.288 START TEST event_perf 00:05:49.288 ************************************ 00:05:49.288 16:30:53 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:49.288 Running I/O for 1 seconds...[2024-10-14 16:30:53.896364] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:05:49.288 [2024-10-14 16:30:53.896432] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357415 ] 00:05:49.546 [2024-10-14 16:30:53.968748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.546 [2024-10-14 16:30:54.012175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.546 [2024-10-14 16:30:54.012284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.546 [2024-10-14 16:30:54.012367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.546 [2024-10-14 16:30:54.012367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.481 Running I/O for 1 seconds... 00:05:50.481 lcore 0: 203389 00:05:50.481 lcore 1: 203388 00:05:50.481 lcore 2: 203389 00:05:50.481 lcore 3: 203390 00:05:50.481 done. 00:05:50.481 00:05:50.481 real 0m1.177s 00:05:50.481 user 0m4.093s 00:05:50.481 sys 0m0.082s 00:05:50.481 16:30:55 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.481 16:30:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.481 ************************************ 00:05:50.481 END TEST event_perf 00:05:50.481 ************************************ 00:05:50.481 16:30:55 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.481 16:30:55 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:50.481 16:30:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.481 16:30:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.739 ************************************ 00:05:50.739 START TEST event_reactor 00:05:50.739 ************************************ 00:05:50.739 16:30:55 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.739 [2024-10-14 16:30:55.141698] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:05:50.739 [2024-10-14 16:30:55.141769] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357581 ] 00:05:50.739 [2024-10-14 16:30:55.211325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.739 [2024-10-14 16:30:55.251117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.675 test_start 00:05:51.675 oneshot 00:05:51.675 tick 100 00:05:51.675 tick 100 00:05:51.675 tick 250 00:05:51.675 tick 100 00:05:51.675 tick 100 00:05:51.675 tick 100 00:05:51.675 tick 250 00:05:51.675 tick 500 00:05:51.675 tick 100 00:05:51.675 tick 100 00:05:51.675 tick 250 00:05:51.675 tick 100 00:05:51.675 tick 100 00:05:51.675 test_end 00:05:51.675 00:05:51.675 real 0m1.166s 00:05:51.675 user 0m1.086s 00:05:51.675 sys 0m0.077s 00:05:51.675 16:30:56 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.675 16:30:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:51.675 ************************************ 00:05:51.675 END TEST event_reactor 00:05:51.675 ************************************ 00:05:51.934 16:30:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:51.934 16:30:56 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:51.934 16:30:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.934 16:30:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.934 ************************************ 00:05:51.934 START TEST event_reactor_perf 00:05:51.934 ************************************ 00:05:51.934 16:30:56 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:51.934 [2024-10-14 16:30:56.375847] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
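The three event-framework micro-benchmarks in this block take a run time in seconds via -t (event_perf additionally takes a core mask via -m); a minimal sketch of standalone invocations, assuming the binaries built under test/event/ in this SPDK tree:

  # spread events over 4 reactors for 1 second and print per-lcore counts
  ./test/event/event_perf/event_perf -m 0xF -t 1
  # single-core reactor: timer-tick trace, then a raw events-per-second figure
  ./test/event/reactor/reactor -t 1
  ./test/event/reactor_perf/reactor_perf -t 1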
00:05:51.934 [2024-10-14 16:30:56.375903] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357744 ] 00:05:51.934 [2024-10-14 16:30:56.444127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.934 [2024-10-14 16:30:56.484224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.311 test_start 00:05:53.311 test_end 00:05:53.311 Performance: 520098 events per second 00:05:53.311 00:05:53.311 real 0m1.167s 00:05:53.311 user 0m1.086s 00:05:53.311 sys 0m0.076s 00:05:53.311 16:30:57 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.311 16:30:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.311 ************************************ 00:05:53.311 END TEST event_reactor_perf 00:05:53.311 ************************************ 00:05:53.311 16:30:57 event -- event/event.sh@49 -- # uname -s 00:05:53.311 16:30:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.311 16:30:57 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.311 16:30:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.311 16:30:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.311 16:30:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.311 ************************************ 00:05:53.311 START TEST event_scheduler 00:05:53.311 ************************************ 00:05:53.311 16:30:57 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:53.311 * Looking for test storage... 
00:05:53.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:53.311 16:30:57 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:53.311 16:30:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:53.311 16:30:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:53.311 16:30:57 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.311 16:30:57 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:53.311 16:30:57 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.311 16:30:57 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:53.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.311 --rc genhtml_branch_coverage=1 00:05:53.311 --rc genhtml_function_coverage=1 00:05:53.311 --rc genhtml_legend=1 00:05:53.311 --rc geninfo_all_blocks=1 00:05:53.311 --rc geninfo_unexecuted_blocks=1 00:05:53.311 00:05:53.311 ' 00:05:53.311 16:30:57 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:53.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.311 --rc genhtml_branch_coverage=1 00:05:53.311 --rc genhtml_function_coverage=1 00:05:53.311 --rc genhtml_legend=1 00:05:53.311 --rc geninfo_all_blocks=1 00:05:53.311 --rc geninfo_unexecuted_blocks=1 00:05:53.311 00:05:53.311 ' 00:05:53.312 16:30:57 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:53.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.312 --rc genhtml_branch_coverage=1 00:05:53.312 --rc genhtml_function_coverage=1 00:05:53.312 --rc genhtml_legend=1 00:05:53.312 --rc geninfo_all_blocks=1 00:05:53.312 --rc geninfo_unexecuted_blocks=1 00:05:53.312 00:05:53.312 ' 00:05:53.312 16:30:57 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:53.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.312 --rc genhtml_branch_coverage=1 00:05:53.312 --rc genhtml_function_coverage=1 00:05:53.312 --rc genhtml_legend=1 00:05:53.312 --rc geninfo_all_blocks=1 00:05:53.312 --rc geninfo_unexecuted_blocks=1 00:05:53.312 00:05:53.312 ' 00:05:53.312 16:30:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.312 16:30:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=358050 00:05:53.312 16:30:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:53.312 16:30:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.312 16:30:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 358050 
00:05:53.312 16:30:57 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 358050 ']' 00:05:53.312 16:30:57 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.312 16:30:57 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.312 16:30:57 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.312 16:30:57 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.312 16:30:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.312 [2024-10-14 16:30:57.820112] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:05:53.312 [2024-10-14 16:30:57.820159] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358050 ] 00:05:53.312 [2024-10-14 16:30:57.889432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.312 [2024-10-14 16:30:57.934614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.312 [2024-10-14 16:30:57.934716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.312 [2024-10-14 16:30:57.934732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.312 [2024-10-14 16:30:57.934739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.571 16:30:57 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.571 16:30:57 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:53.571 16:30:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:53.571 16:30:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 [2024-10-14 16:30:57.991369] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:53.571 [2024-10-14 16:30:57.991385] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:53.571 [2024-10-14 16:30:57.991397] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:53.571 [2024-10-14 16:30:57.991402] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:53.571 [2024-10-14 16:30:57.991407] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:53.571 16:30:57 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:53.571 16:30:57 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 [2024-10-14 16:30:58.064859] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:53.571 16:30:58 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:53.571 16:30:58 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.571 16:30:58 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 ************************************ 00:05:53.571 START TEST scheduler_create_thread 00:05:53.571 ************************************ 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 2 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 3 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 4 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 5 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 6 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 7 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 8 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 9 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 10 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.571 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.139 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.139 16:30:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:54.139 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.139 16:30:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.515 16:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.515 16:31:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:55.515 16:31:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:55.515 16:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.515 16:31:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.964 16:31:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.964 00:05:56.964 real 0m3.102s 00:05:56.964 user 0m0.024s 00:05:56.964 sys 0m0.005s 00:05:56.964 16:31:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.964 16:31:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.964 ************************************ 00:05:56.964 END TEST scheduler_create_thread 00:05:56.964 ************************************ 00:05:56.964 16:31:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:56.964 16:31:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 358050 00:05:56.964 16:31:01 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 358050 ']' 00:05:56.964 16:31:01 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 358050 00:05:56.964 16:31:01 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:56.964 16:31:01 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.964 16:31:01 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 358050 00:05:56.965 16:31:01 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:56.965 16:31:01 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:56.965 16:31:01 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 358050' 00:05:56.965 killing process with pid 358050 00:05:56.965 16:31:01 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 358050 00:05:56.965 16:31:01 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 358050 00:05:56.965 [2024-10-14 16:31:01.584150] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
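The scheduler subtest that just stopped is driven entirely over RPC; the same sequence can be replayed by hand against a running scheduler app. A sketch, assuming rpc.py can import the scheduler_plugin module shipped in test/event/scheduler (e.g. via PYTHONPATH), and noting that thread ids 11 and 12 are simply the ids this particular run handed back:

  # switch the framework to the dynamic scheduler, then let init finish
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  # create a thread pinned to core 0 at 100% active load (prints its thread id)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # retune one thread's active load to 50% and delete another, by id
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12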
00:05:57.230 00:05:57.230 real 0m4.172s 00:05:57.230 user 0m6.712s 00:05:57.230 sys 0m0.368s 00:05:57.230 16:31:01 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.230 16:31:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.230 ************************************ 00:05:57.230 END TEST event_scheduler 00:05:57.230 ************************************ 00:05:57.230 16:31:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:57.230 16:31:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:57.230 16:31:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.230 16:31:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.230 16:31:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.230 ************************************ 00:05:57.230 START TEST app_repeat 00:05:57.230 ************************************ 00:05:57.230 16:31:01 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=358745 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 358745' 00:05:57.230 Process app_repeat pid: 358745 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:57.230 spdk_app_start Round 0 00:05:57.230 16:31:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 358745 /var/tmp/spdk-nbd.sock 00:05:57.230 16:31:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 358745 ']' 00:05:57.230 16:31:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.230 16:31:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.230 16:31:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.230 16:31:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.230 16:31:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.489 [2024-10-14 16:31:01.881962] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:05:57.489 [2024-10-14 16:31:01.882026] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358745 ] 00:05:57.489 [2024-10-14 16:31:01.953581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.489 [2024-10-14 16:31:01.997397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.489 [2024-10-14 16:31:01.997399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.489 16:31:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.489 16:31:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:57.489 16:31:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.748 Malloc0 00:05:57.748 16:31:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.006 Malloc1 00:05:58.006 16:31:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.006 16:31:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.266 /dev/nbd0 00:05:58.266 16:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.266 16:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.266 1+0 records in 00:05:58.266 1+0 records out 00:05:58.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188074 s, 21.8 MB/s 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.266 16:31:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:58.266 16:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.266 16:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.266 16:31:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.524 /dev/nbd1 00:05:58.524 16:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.524 16:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.524 1+0 records in 00:05:58.524 1+0 records out 00:05:58.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020361 s, 20.1 MB/s 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.524 16:31:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:58.525 16:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.525 16:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.525 16:31:02 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.525 16:31:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.525 16:31:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:58.783 { 00:05:58.783 "nbd_device": "/dev/nbd0", 00:05:58.783 "bdev_name": "Malloc0" 00:05:58.783 }, 00:05:58.783 { 00:05:58.783 "nbd_device": "/dev/nbd1", 00:05:58.783 "bdev_name": "Malloc1" 00:05:58.783 } 00:05:58.783 ]' 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.783 { 00:05:58.783 "nbd_device": "/dev/nbd0", 00:05:58.783 "bdev_name": "Malloc0" 00:05:58.783 }, 00:05:58.783 { 00:05:58.783 "nbd_device": "/dev/nbd1", 00:05:58.783 "bdev_name": "Malloc1" 00:05:58.783 } 00:05:58.783 ]' 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.783 /dev/nbd1' 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.783 /dev/nbd1' 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.783 256+0 records in 00:05:58.783 256+0 records out 00:05:58.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108418 s, 96.7 MB/s 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.783 256+0 records in 00:05:58.783 256+0 records out 00:05:58.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132722 s, 79.0 MB/s 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.783 16:31:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.783 256+0 records in 00:05:58.783 256+0 records out 00:05:58.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149541 s, 70.1 MB/s 00:05:58.784 16:31:03 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.784 16:31:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.042 16:31:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.042 16:31:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.042 16:31:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.042 16:31:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.042 16:31:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.042 16:31:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.042 16:31:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.042 16:31:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.042 16:31:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.042 16:31:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.301 16:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.560 16:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.560 16:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.560 16:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.560 16:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.560 16:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.560 16:31:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.560 16:31:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.560 16:31:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.560 16:31:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.560 16:31:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.560 16:31:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.819 [2024-10-14 16:31:04.322084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.819 [2024-10-14 16:31:04.360427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.819 [2024-10-14 16:31:04.360428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.819 [2024-10-14 16:31:04.401188] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.819 [2024-10-14 16:31:04.401227] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:03.106 16:31:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:03.106 16:31:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:03.106 spdk_app_start Round 1 00:06:03.106 16:31:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 358745 /var/tmp/spdk-nbd.sock 00:06:03.106 16:31:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 358745 ']' 00:06:03.106 16:31:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.106 16:31:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.106 16:31:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:03.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
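At this point Round 0 has finished and the harness is waiting for app_repeat to open its RPC socket again for Round 1. For orientation, the round loop behind the event/event.sh@NN markers reduces to roughly the outline below; it is a hedged paraphrase reconstructed from the trace (variable names and the relative rpc.py path are assumptions), not a copy of event.sh:

# Rough outline of the app_repeat round loop (reconstructed from the trace, not verbatim).
rpc_server=/var/tmp/spdk-nbd.sock
rpc="scripts/rpc.py -s $rpc_server"

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"      # wait for the app to listen on the UNIX socket

    # Two malloc bdevs (64 MB, 4096-byte blocks) back /dev/nbd0 and /dev/nbd1.
    $rpc bdev_malloc_create 64 4096                # -> Malloc0
    $rpc bdev_malloc_create 64 4096                # -> Malloc1
    nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'

    # Ask the app to stop this iteration; app_repeat (started with -t 4) begins the next round.
    $rpc spdk_kill_instance SIGTERM
    sleep 3
done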
00:06:03.106 16:31:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.106 16:31:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.106 16:31:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.106 16:31:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:03.106 16:31:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.106 Malloc0 00:06:03.106 16:31:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:03.365 Malloc1 00:06:03.365 16:31:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.365 16:31:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.623 /dev/nbd0 00:06:03.623 16:31:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.623 16:31:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:03.623 1+0 records in 00:06:03.623 1+0 records out 00:06:03.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246716 s, 16.6 MB/s 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:03.623 16:31:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:03.623 16:31:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.623 16:31:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.623 16:31:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.623 /dev/nbd1 00:06:03.881 16:31:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.881 16:31:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.881 16:31:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:03.881 16:31:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:03.881 16:31:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:03.881 16:31:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:03.881 16:31:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:03.881 16:31:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:03.881 16:31:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:03.881 16:31:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:03.881 16:31:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.881 1+0 records in 00:06:03.881 1+0 records out 00:06:03.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232204 s, 17.6 MB/s 00:06:03.882 16:31:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.882 16:31:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:03.882 16:31:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.882 16:31:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:03.882 16:31:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:03.882 16:31:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.882 16:31:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.882 16:31:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.882 16:31:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.882 16:31:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.882 16:31:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:03.882 { 00:06:03.882 "nbd_device": "/dev/nbd0", 00:06:03.882 "bdev_name": "Malloc0" 00:06:03.882 }, 00:06:03.882 { 00:06:03.882 "nbd_device": "/dev/nbd1", 00:06:03.882 "bdev_name": "Malloc1" 00:06:03.882 } 00:06:03.882 ]' 00:06:03.882 16:31:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.882 { 00:06:03.882 "nbd_device": "/dev/nbd0", 00:06:03.882 "bdev_name": "Malloc0" 00:06:03.882 }, 00:06:03.882 { 00:06:03.882 "nbd_device": "/dev/nbd1", 00:06:03.882 "bdev_name": "Malloc1" 00:06:03.882 } 00:06:03.882 ]' 00:06:03.882 16:31:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.141 /dev/nbd1' 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.141 /dev/nbd1' 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.141 256+0 records in 00:06:04.141 256+0 records out 00:06:04.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102575 s, 102 MB/s 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.141 256+0 records in 00:06:04.141 256+0 records out 00:06:04.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146245 s, 71.7 MB/s 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.141 256+0 records in 00:06:04.141 256+0 records out 00:06:04.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152505 s, 68.8 MB/s 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.141 16:31:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.400 16:31:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.400 16:31:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.400 16:31:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.400 16:31:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.400 16:31:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.400 16:31:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.400 16:31:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.400 16:31:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.400 16:31:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.400 16:31:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.400 16:31:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.400 16:31:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.400 16:31:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.400 16:31:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.400 16:31:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.400 16:31:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.659 16:31:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.659 16:31:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.918 16:31:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.177 [2024-10-14 16:31:09.638346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.177 [2024-10-14 16:31:09.674754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.177 [2024-10-14 16:31:09.674755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.177 [2024-10-14 16:31:09.716012] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.177 [2024-10-14 16:31:09.716052] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.463 16:31:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.463 16:31:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:08.463 spdk_app_start Round 2 00:06:08.463 16:31:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 358745 /var/tmp/spdk-nbd.sock 00:06:08.463 16:31:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 358745 ']' 00:06:08.463 16:31:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.463 16:31:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.463 16:31:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
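Each round's data pass is the same handful of dd and cmp invocations repeated for both NBD devices. A minimal standalone sketch of that write/verify flow follows, assuming a throwaway temp-file path for illustration (the real test keeps nbdrandtest under spdk/test/event/):

# Hedged sketch of the nbd write/verify pass seen in the bdev/nbd_common.sh trace above.
tmp_file=/tmp/nbdrandtest                # assumed path for illustration
nbd_list=(/dev/nbd0 /dev/nbd1)

# 1 MiB of random data, written to every NBD device with O_DIRECT.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done

# Read back and byte-compare the first 1 MiB of each device against the source data.
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"
done
rm "$tmp_file"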
00:06:08.463 16:31:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.463 16:31:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.463 16:31:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.463 16:31:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:08.463 16:31:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.463 Malloc0 00:06:08.463 16:31:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.722 Malloc1 00:06:08.722 16:31:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.722 16:31:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.722 /dev/nbd0 00:06:08.981 16:31:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.981 16:31:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:08.981 1+0 records in 00:06:08.981 1+0 records out 00:06:08.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00360383 s, 1.1 MB/s 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:08.981 16:31:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.981 16:31:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.981 16:31:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.981 /dev/nbd1 00:06:08.981 16:31:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.981 16:31:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.981 1+0 records in 00:06:08.981 1+0 records out 00:06:08.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172304 s, 23.8 MB/s 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:08.981 16:31:13 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.240 16:31:13 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.240 16:31:13 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:09.240 { 00:06:09.240 "nbd_device": "/dev/nbd0", 00:06:09.240 "bdev_name": "Malloc0" 00:06:09.240 }, 00:06:09.240 { 00:06:09.240 "nbd_device": "/dev/nbd1", 00:06:09.240 "bdev_name": "Malloc1" 00:06:09.240 } 00:06:09.240 ]' 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.240 { 00:06:09.240 "nbd_device": "/dev/nbd0", 00:06:09.240 "bdev_name": "Malloc0" 00:06:09.240 }, 00:06:09.240 { 00:06:09.240 "nbd_device": "/dev/nbd1", 00:06:09.240 "bdev_name": "Malloc1" 00:06:09.240 } 00:06:09.240 ]' 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.240 /dev/nbd1' 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.240 /dev/nbd1' 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.240 16:31:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.500 256+0 records in 00:06:09.500 256+0 records out 00:06:09.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108256 s, 96.9 MB/s 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.500 256+0 records in 00:06:09.500 256+0 records out 00:06:09.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141131 s, 74.3 MB/s 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.500 256+0 records in 00:06:09.500 256+0 records out 00:06:09.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014926 s, 70.3 MB/s 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.500 16:31:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.759 16:31:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.019 16:31:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.019 16:31:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.278 16:31:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:10.536 [2024-10-14 16:31:14.980411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.536 [2024-10-14 16:31:15.017178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.536 [2024-10-14 16:31:15.017179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.537 [2024-10-14 16:31:15.057868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.537 [2024-10-14 16:31:15.057905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.823 16:31:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 358745 /var/tmp/spdk-nbd.sock 00:06:13.823 16:31:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 358745 ']' 00:06:13.823 16:31:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.823 16:31:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.823 16:31:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
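Before and after each device is used, the harness polls /proc/partitions: waitfornbd waits for the device to appear (and then proves it readable with a small O_DIRECT dd), while waitfornbd_exit waits for it to disappear after nbd_stop_disk. Below is a hedged reconstruction of those two helpers, with the 20-iteration bound taken from the trace and the 0.1 s sleep assumed:

# Hedged reconstruction of the wait helpers behind the grep/break lines above (not verbatim).
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                          # assumed back-off between polls
    done
    # the real helper then dd-reads 4 KiB from /dev/$nbd_name with iflag=direct to prove it is usable
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
    return 0
}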
00:06:13.823 16:31:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.823 16:31:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:13.823 16:31:18 event.app_repeat -- event/event.sh@39 -- # killprocess 358745 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 358745 ']' 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 358745 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 358745 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 358745' 00:06:13.823 killing process with pid 358745 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@969 -- # kill 358745 00:06:13.823 16:31:18 event.app_repeat -- common/autotest_common.sh@974 -- # wait 358745 00:06:13.823 spdk_app_start is called in Round 0. 00:06:13.823 Shutdown signal received, stop current app iteration 00:06:13.823 Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 reinitialization... 00:06:13.823 spdk_app_start is called in Round 1. 00:06:13.823 Shutdown signal received, stop current app iteration 00:06:13.823 Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 reinitialization... 00:06:13.823 spdk_app_start is called in Round 2. 00:06:13.823 Shutdown signal received, stop current app iteration 00:06:13.823 Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 reinitialization... 00:06:13.823 spdk_app_start is called in Round 3. 
00:06:13.823 Shutdown signal received, stop current app iteration 00:06:13.823 16:31:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:13.824 16:31:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:13.824 00:06:13.824 real 0m16.376s 00:06:13.824 user 0m36.015s 00:06:13.824 sys 0m2.523s 00:06:13.824 16:31:18 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.824 16:31:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:13.824 ************************************ 00:06:13.824 END TEST app_repeat 00:06:13.824 ************************************ 00:06:13.824 16:31:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:13.824 16:31:18 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.824 16:31:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.824 16:31:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.824 16:31:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.824 ************************************ 00:06:13.824 START TEST cpu_locks 00:06:13.824 ************************************ 00:06:13.824 16:31:18 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.824 * Looking for test storage... 00:06:13.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:13.824 16:31:18 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:13.824 16:31:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:13.824 16:31:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:13.824 16:31:18 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:13.824 16:31:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.083 16:31:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:14.083 16:31:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:14.083 16:31:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.083 16:31:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:14.083 16:31:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.083 16:31:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.083 16:31:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.083 16:31:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:14.083 16:31:18 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.083 16:31:18 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:14.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.083 --rc genhtml_branch_coverage=1 00:06:14.083 --rc genhtml_function_coverage=1 00:06:14.083 --rc genhtml_legend=1 00:06:14.083 --rc geninfo_all_blocks=1 00:06:14.083 --rc geninfo_unexecuted_blocks=1 00:06:14.083 00:06:14.083 ' 00:06:14.083 16:31:18 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:14.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.083 --rc genhtml_branch_coverage=1 00:06:14.083 --rc genhtml_function_coverage=1 00:06:14.083 --rc genhtml_legend=1 00:06:14.083 --rc geninfo_all_blocks=1 00:06:14.083 --rc geninfo_unexecuted_blocks=1 00:06:14.083 00:06:14.083 ' 00:06:14.083 16:31:18 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:14.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.083 --rc genhtml_branch_coverage=1 00:06:14.083 --rc genhtml_function_coverage=1 00:06:14.083 --rc genhtml_legend=1 00:06:14.083 --rc geninfo_all_blocks=1 00:06:14.083 --rc geninfo_unexecuted_blocks=1 00:06:14.083 00:06:14.083 ' 00:06:14.083 16:31:18 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:14.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.083 --rc genhtml_branch_coverage=1 00:06:14.083 --rc genhtml_function_coverage=1 00:06:14.083 --rc genhtml_legend=1 00:06:14.083 --rc geninfo_all_blocks=1 00:06:14.083 --rc geninfo_unexecuted_blocks=1 00:06:14.083 00:06:14.083 ' 00:06:14.083 16:31:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:14.083 16:31:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:14.083 16:31:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:14.083 16:31:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:14.083 16:31:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.083 16:31:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.083 16:31:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.083 ************************************ 
00:06:14.083 START TEST default_locks 00:06:14.083 ************************************ 00:06:14.083 16:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:14.083 16:31:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=361858 00:06:14.084 16:31:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.084 16:31:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 361858 00:06:14.084 16:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 361858 ']' 00:06:14.084 16:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.084 16:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.084 16:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.084 16:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.084 16:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.084 [2024-10-14 16:31:18.549319] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:06:14.084 [2024-10-14 16:31:18.549357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361858 ] 00:06:14.084 [2024-10-14 16:31:18.617012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.084 [2024-10-14 16:31:18.659030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.343 16:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.343 16:31:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:14.343 16:31:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 361858 00:06:14.343 16:31:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 361858 00:06:14.343 16:31:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.602 lslocks: write error 00:06:14.602 16:31:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 361858 00:06:14.602 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 361858 ']' 00:06:14.602 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 361858 00:06:14.602 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:14.602 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.602 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 361858 00:06:14.860 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.860 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.860 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 361858' 
00:06:14.860 killing process with pid 361858 00:06:14.860 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 361858 00:06:14.860 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 361858 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 361858 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 361858 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 361858 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 361858 ']' 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (361858) - No such process 00:06:15.119 ERROR: process (pid: 361858) is no longer running 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.119 00:06:15.119 real 0m1.065s 00:06:15.119 user 0m1.018s 00:06:15.119 sys 0m0.493s 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.119 16:31:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.119 ************************************ 00:06:15.119 END TEST default_locks 00:06:15.119 ************************************ 00:06:15.119 16:31:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:15.119 16:31:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.120 16:31:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.120 16:31:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.120 ************************************ 00:06:15.120 START TEST default_locks_via_rpc 00:06:15.120 ************************************ 00:06:15.120 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:15.120 16:31:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=361995 00:06:15.120 16:31:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 361995 00:06:15.120 16:31:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.120 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 361995 ']' 00:06:15.120 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.120 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.120 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
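The default_locks run above (target pid 361858, core mask 0x1) boils down to: start a target, confirm via lslocks that it holds an advisory lock on its per-core lock file, kill it, and verify nothing answers on that pid afterwards. A standalone sketch of the same check, assuming an SPDK build tree; the binary path, mask, and lslocks/grep pattern are the ones visible in the trace, the settle time is illustrative:
  # a single-core target should take /var/tmp/spdk_cpu_lock_000
  ./build/bin/spdk_tgt -m 0x1 &
  pid=$!
  sleep 1
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
  kill "$pid"    # after this, kill -0 "$pid" (and waitforlisten on it) must fail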
00:06:15.120 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.120 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.120 [2024-10-14 16:31:19.687494] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:06:15.120 [2024-10-14 16:31:19.687538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361995 ] 00:06:15.120 [2024-10-14 16:31:19.737759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.378 [2024-10-14 16:31:19.781656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.378 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.378 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:15.378 16:31:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:15.378 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.378 16:31:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.378 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.378 16:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:15.378 16:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.378 16:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.378 16:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.378 16:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:15.378 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.378 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.378 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.378 16:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 361995 00:06:15.636 16:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 361995 00:06:15.636 16:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.894 16:31:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 361995 00:06:15.894 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 361995 ']' 00:06:15.894 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 361995 00:06:15.894 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:15.895 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.895 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 361995 00:06:15.895 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.895 16:31:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.895 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 361995' 00:06:15.895 killing process with pid 361995 00:06:15.895 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 361995 00:06:15.895 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 361995 00:06:16.461 00:06:16.461 real 0m1.169s 00:06:16.461 user 0m1.129s 00:06:16.461 sys 0m0.529s 00:06:16.461 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.461 16:31:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.461 ************************************ 00:06:16.461 END TEST default_locks_via_rpc 00:06:16.461 ************************************ 00:06:16.461 16:31:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:16.461 16:31:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.461 16:31:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.461 16:31:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.461 ************************************ 00:06:16.461 START TEST non_locking_app_on_locked_coremask 00:06:16.461 ************************************ 00:06:16.461 16:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:16.461 16:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=362249 00:06:16.461 16:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 362249 /var/tmp/spdk.sock 00:06:16.461 16:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.461 16:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 362249 ']' 00:06:16.461 16:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.461 16:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.461 16:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.461 16:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.461 16:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.461 [2024-10-14 16:31:20.919748] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:06:16.461 [2024-10-14 16:31:20.919787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362249 ] 00:06:16.461 [2024-10-14 16:31:20.986471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.461 [2024-10-14 16:31:21.028131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=362367 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 362367 /var/tmp/spdk2.sock 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 362367 ']' 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.720 16:31:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.720 [2024-10-14 16:31:21.295056] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:06:16.720 [2024-10-14 16:31:21.295102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362367 ] 00:06:16.979 [2024-10-14 16:31:21.370544] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
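In the non_locking_app_on_locked_coremask step above, the second target is started on the same mask but with --disable-cpumask-locks and a separate RPC socket, which is why both instances can come up. A condensed sketch of that pairing, with paths and flags taken from the trace and the rest illustrative:
  ./build/bin/spdk_tgt -m 0x1 &                                    # claims core 0
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # the second instance prints 'CPU core locks deactivated' and takes no lock file,
  # so only the first pid shows a spdk_cpu_lock entry in lslocks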
00:06:16.979 [2024-10-14 16:31:21.370570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.979 [2024-10-14 16:31:21.458768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.547 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.547 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:17.547 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 362249 00:06:17.547 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 362249 00:06:17.547 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.116 lslocks: write error 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 362249 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 362249 ']' 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 362249 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 362249 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 362249' 00:06:18.116 killing process with pid 362249 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 362249 00:06:18.116 16:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 362249 00:06:18.683 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 362367 00:06:18.683 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 362367 ']' 00:06:18.683 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 362367 00:06:18.683 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.683 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.683 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 362367 00:06:18.942 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.942 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.942 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 362367' 00:06:18.942 killing 
process with pid 362367 00:06:18.942 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 362367 00:06:18.942 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 362367 00:06:19.201 00:06:19.201 real 0m2.756s 00:06:19.201 user 0m2.875s 00:06:19.201 sys 0m0.933s 00:06:19.201 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.201 16:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.201 ************************************ 00:06:19.201 END TEST non_locking_app_on_locked_coremask 00:06:19.201 ************************************ 00:06:19.201 16:31:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:19.201 16:31:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.201 16:31:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.201 16:31:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.201 ************************************ 00:06:19.201 START TEST locking_app_on_unlocked_coremask 00:06:19.201 ************************************ 00:06:19.201 16:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:19.201 16:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=362746 00:06:19.201 16:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 362746 /var/tmp/spdk.sock 00:06:19.201 16:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:19.201 16:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 362746 ']' 00:06:19.201 16:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.201 16:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.201 16:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.201 16:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.201 16:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.201 [2024-10-14 16:31:23.748261] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:06:19.201 [2024-10-14 16:31:23.748306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362746 ] 00:06:19.201 [2024-10-14 16:31:23.806191] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:19.201 [2024-10-14 16:31:23.806217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.460 [2024-10-14 16:31:23.848070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=362901 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 362901 /var/tmp/spdk2.sock 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 362901 ']' 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.460 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.719 [2024-10-14 16:31:24.122377] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:06:19.719 [2024-10-14 16:31:24.122431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362901 ] 00:06:19.719 [2024-10-14 16:31:24.199069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.719 [2024-10-14 16:31:24.279277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.653 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.653 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:20.653 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 362901 00:06:20.653 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 362901 00:06:20.653 16:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.220 lslocks: write error 00:06:21.220 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 362746 00:06:21.220 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 362746 ']' 00:06:21.220 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 362746 00:06:21.220 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:21.220 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.220 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 362746 00:06:21.220 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.221 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.221 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 362746' 00:06:21.221 killing process with pid 362746 00:06:21.221 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 362746 00:06:21.221 16:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 362746 00:06:21.788 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 362901 00:06:21.788 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 362901 ']' 00:06:21.788 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 362901 00:06:21.788 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:21.788 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.788 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 362901 00:06:21.788 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.788 16:31:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.788 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 362901' 00:06:21.788 killing process with pid 362901 00:06:21.788 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 362901 00:06:21.788 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 362901 00:06:22.048 00:06:22.048 real 0m2.866s 00:06:22.048 user 0m3.018s 00:06:22.048 sys 0m0.945s 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.048 ************************************ 00:06:22.048 END TEST locking_app_on_unlocked_coremask 00:06:22.048 ************************************ 00:06:22.048 16:31:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:22.048 16:31:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.048 16:31:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.048 16:31:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.048 ************************************ 00:06:22.048 START TEST locking_app_on_locked_coremask 00:06:22.048 ************************************ 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=363276 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 363276 /var/tmp/spdk.sock 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 363276 ']' 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.048 16:31:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.048 [2024-10-14 16:31:26.681218] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:06:22.048 [2024-10-14 16:31:26.681257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363276 ] 00:06:22.307 [2024-10-14 16:31:26.749821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.307 [2024-10-14 16:31:26.791500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=363465 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 363465 /var/tmp/spdk2.sock 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 363465 /var/tmp/spdk2.sock 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 363465 /var/tmp/spdk2.sock 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 363465 ']' 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.566 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.566 [2024-10-14 16:31:27.055448] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:06:22.566 [2024-10-14 16:31:27.055496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363465 ] 00:06:22.566 [2024-10-14 16:31:27.128925] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 363276 has claimed it. 00:06:22.566 [2024-10-14 16:31:27.128965] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:23.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (363465) - No such process 00:06:23.134 ERROR: process (pid: 363465) is no longer running 00:06:23.134 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.134 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:23.134 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:23.134 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.134 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.134 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.134 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 363276 00:06:23.134 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 363276 00:06:23.134 16:31:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.738 lslocks: write error 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 363276 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 363276 ']' 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 363276 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 363276 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 363276' 00:06:23.738 killing process with pid 363276 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 363276 00:06:23.738 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 363276 00:06:23.998 00:06:23.998 real 0m1.813s 00:06:23.998 user 0m1.936s 00:06:23.998 sys 0m0.602s 00:06:23.998 16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.998 
16:31:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.998 ************************************ 00:06:23.998 END TEST locking_app_on_locked_coremask 00:06:23.998 ************************************ 00:06:23.998 16:31:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:23.998 16:31:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.998 16:31:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.998 16:31:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.998 ************************************ 00:06:23.998 START TEST locking_overlapped_coremask 00:06:23.998 ************************************ 00:06:23.998 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:23.998 16:31:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=363726 00:06:23.998 16:31:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 363726 /var/tmp/spdk.sock 00:06:23.998 16:31:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:23.998 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 363726 ']' 00:06:23.998 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.998 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.998 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.998 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.998 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.998 [2024-10-14 16:31:28.568901] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
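The locking_app_on_locked_coremask failure recorded above ("Cannot create lock on core 0, probably process 363276 has claimed it") is the intended negative case: a second lock-enforcing target pointed at an already claimed mask must refuse to start. A sketch of reproducing it, assumptions as in the earlier sketches:
  ./build/bin/spdk_tgt -m 0x1 &                                    # owns core 0
  if ! ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "core 0 already claimed, as the test expects"
  fi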
00:06:23.998 [2024-10-14 16:31:28.568945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363726 ] 00:06:24.257 [2024-10-14 16:31:28.635100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.257 [2024-10-14 16:31:28.676133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.257 [2024-10-14 16:31:28.676240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.257 [2024-10-14 16:31:28.676241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.257 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.257 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=363742 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 363742 /var/tmp/spdk2.sock 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 363742 /var/tmp/spdk2.sock 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 363742 /var/tmp/spdk2.sock 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 363742 ']' 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.516 16:31:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.516 [2024-10-14 16:31:28.944473] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:06:24.516 [2024-10-14 16:31:28.944519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363742 ] 00:06:24.516 [2024-10-14 16:31:29.021095] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 363726 has claimed it. 00:06:24.516 [2024-10-14 16:31:29.021136] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (363742) - No such process 00:06:25.085 ERROR: process (pid: 363742) is no longer running 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 363726 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 363726 ']' 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 363726 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 363726 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 363726' 00:06:25.085 killing process with pid 363726 00:06:25.085 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 363726 00:06:25.085 16:31:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 363726 00:06:25.344 00:06:25.344 real 0m1.422s 00:06:25.344 user 0m3.931s 00:06:25.344 sys 0m0.395s 00:06:25.344 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.344 16:31:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.344 ************************************ 00:06:25.344 END TEST locking_overlapped_coremask 00:06:25.344 ************************************ 00:06:25.344 16:31:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:25.344 16:31:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.344 16:31:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.344 16:31:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.604 ************************************ 00:06:25.604 START TEST locking_overlapped_coremask_via_rpc 00:06:25.604 ************************************ 00:06:25.604 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:25.604 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=363998 00:06:25.604 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 363998 /var/tmp/spdk.sock 00:06:25.604 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:25.604 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 363998 ']' 00:06:25.604 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.604 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.604 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.604 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.604 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.604 [2024-10-14 16:31:30.065017] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:06:25.604 [2024-10-14 16:31:30.065060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363998 ] 00:06:25.604 [2024-10-14 16:31:30.134360] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
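The locking_overlapped_coremask run that finished above uses masks overlapping on core 2 (0x7 versus 0x1c), so the second target fails its claim and the surviving lock files are exactly /var/tmp/spdk_cpu_lock_000 through _002, as check_remaining_locks verifies. A sketch of that conflict, same assumptions:
  ./build/bin/spdk_tgt -m 0x7 &                                    # locks cores 0-2
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock || true      # fails on core 2
  ls /var/tmp/spdk_cpu_lock_*                                      # expect _000 _001 _002 only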
00:06:25.604 [2024-10-14 16:31:30.134385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.604 [2024-10-14 16:31:30.179866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.604 [2024-10-14 16:31:30.179902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.604 [2024-10-14 16:31:30.179902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=364008 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 364008 /var/tmp/spdk2.sock 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 364008 ']' 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.862 16:31:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.862 [2024-10-14 16:31:30.441102] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:06:25.862 [2024-10-14 16:31:30.441150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364008 ] 00:06:26.122 [2024-10-14 16:31:30.517599] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:26.122 [2024-10-14 16:31:30.517627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.122 [2024-10-14 16:31:30.605341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.122 [2024-10-14 16:31:30.608645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.122 [2024-10-14 16:31:30.608646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.690 [2024-10-14 16:31:31.294669] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 363998 has claimed it. 
00:06:26.690 request: 00:06:26.690 { 00:06:26.690 "method": "framework_enable_cpumask_locks", 00:06:26.690 "req_id": 1 00:06:26.690 } 00:06:26.690 Got JSON-RPC error response 00:06:26.690 response: 00:06:26.690 { 00:06:26.690 "code": -32603, 00:06:26.690 "message": "Failed to claim CPU core: 2" 00:06:26.690 } 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 363998 /var/tmp/spdk.sock 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 363998 ']' 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.690 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.949 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.949 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:26.949 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 364008 /var/tmp/spdk2.sock 00:06:26.949 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 364008 ']' 00:06:26.949 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.949 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.949 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
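What the failure above amounts to: the first spdk_tgt instance (mask 0x7, cores 0-2) has already enabled cpumask locks, so when the second instance (mask 0x1c, cores 2-4) is asked to do the same, core 2 is already claimed and the RPC returns -32603. A rough sketch of the same sequence outside the harness (paths relative to an SPDK checkout; hugepage and privilege setup assumed):

    ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    ./scripts/rpc.py framework_enable_cpumask_locks                         # first target claims cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 already claimed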
00:06:26.949 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.949 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.208 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.209 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:27.209 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:27.209 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.209 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.209 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.209 00:06:27.209 real 0m1.703s 00:06:27.209 user 0m0.808s 00:06:27.209 sys 0m0.136s 00:06:27.209 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.209 16:31:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.209 ************************************ 00:06:27.209 END TEST locking_overlapped_coremask_via_rpc 00:06:27.209 ************************************ 00:06:27.209 16:31:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:27.209 16:31:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 363998 ]] 00:06:27.209 16:31:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 363998 00:06:27.209 16:31:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 363998 ']' 00:06:27.209 16:31:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 363998 00:06:27.209 16:31:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:27.209 16:31:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.209 16:31:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 363998 00:06:27.209 16:31:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.209 16:31:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.209 16:31:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 363998' 00:06:27.209 killing process with pid 363998 00:06:27.209 16:31:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 363998 00:06:27.209 16:31:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 363998 00:06:27.778 16:31:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 364008 ]] 00:06:27.778 16:31:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 364008 00:06:27.778 16:31:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 364008 ']' 00:06:27.778 16:31:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 364008 00:06:27.778 16:31:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:27.778 16:31:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
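Condensed from the xtrace above, the check_remaining_locks step simply compares the lock files present under /var/tmp with the set expected for mask 0x7 (a sketch with the harness decoration stripped):

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'only cores 0-2 hold lock files'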
00:06:27.778 16:31:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 364008 00:06:27.778 16:31:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:27.778 16:31:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:27.778 16:31:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 364008' 00:06:27.778 killing process with pid 364008 00:06:27.778 16:31:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 364008 00:06:27.778 16:31:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 364008 00:06:28.038 16:31:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.038 16:31:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:28.038 16:31:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 363998 ]] 00:06:28.038 16:31:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 363998 00:06:28.038 16:31:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 363998 ']' 00:06:28.038 16:31:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 363998 00:06:28.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (363998) - No such process 00:06:28.038 16:31:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 363998 is not found' 00:06:28.038 Process with pid 363998 is not found 00:06:28.038 16:31:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 364008 ]] 00:06:28.038 16:31:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 364008 00:06:28.038 16:31:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 364008 ']' 00:06:28.038 16:31:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 364008 00:06:28.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (364008) - No such process 00:06:28.038 16:31:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 364008 is not found' 00:06:28.038 Process with pid 364008 is not found 00:06:28.038 16:31:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:28.038 00:06:28.038 real 0m14.179s 00:06:28.038 user 0m24.458s 00:06:28.038 sys 0m4.990s 00:06:28.038 16:31:32 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.038 16:31:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.038 ************************************ 00:06:28.038 END TEST cpu_locks 00:06:28.038 ************************************ 00:06:28.038 00:06:28.038 real 0m38.848s 00:06:28.038 user 1m13.717s 00:06:28.038 sys 0m8.500s 00:06:28.038 16:31:32 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.038 16:31:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.038 ************************************ 00:06:28.038 END TEST event 00:06:28.038 ************************************ 00:06:28.038 16:31:32 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:28.038 16:31:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.038 16:31:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.038 16:31:32 -- common/autotest_common.sh@10 -- # set +x 00:06:28.038 ************************************ 00:06:28.038 START TEST thread 00:06:28.038 ************************************ 00:06:28.038 16:31:32 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:28.038 * Looking for test storage... 00:06:28.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:28.038 16:31:32 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:28.038 16:31:32 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:28.038 16:31:32 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:28.298 16:31:32 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:28.298 16:31:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.298 16:31:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.298 16:31:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.298 16:31:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.298 16:31:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.298 16:31:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.298 16:31:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.298 16:31:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.298 16:31:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.298 16:31:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.298 16:31:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.298 16:31:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:28.298 16:31:32 thread -- scripts/common.sh@345 -- # : 1 00:06:28.298 16:31:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.298 16:31:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.298 16:31:32 thread -- scripts/common.sh@365 -- # decimal 1 00:06:28.298 16:31:32 thread -- scripts/common.sh@353 -- # local d=1 00:06:28.298 16:31:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.298 16:31:32 thread -- scripts/common.sh@355 -- # echo 1 00:06:28.298 16:31:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.298 16:31:32 thread -- scripts/common.sh@366 -- # decimal 2 00:06:28.298 16:31:32 thread -- scripts/common.sh@353 -- # local d=2 00:06:28.298 16:31:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.298 16:31:32 thread -- scripts/common.sh@355 -- # echo 2 00:06:28.298 16:31:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.298 16:31:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.298 16:31:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.298 16:31:32 thread -- scripts/common.sh@368 -- # return 0 00:06:28.298 16:31:32 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.298 16:31:32 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:28.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.298 --rc genhtml_branch_coverage=1 00:06:28.298 --rc genhtml_function_coverage=1 00:06:28.298 --rc genhtml_legend=1 00:06:28.298 --rc geninfo_all_blocks=1 00:06:28.298 --rc geninfo_unexecuted_blocks=1 00:06:28.298 00:06:28.298 ' 00:06:28.298 16:31:32 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:28.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.298 --rc genhtml_branch_coverage=1 00:06:28.298 --rc genhtml_function_coverage=1 00:06:28.298 --rc genhtml_legend=1 00:06:28.298 --rc geninfo_all_blocks=1 00:06:28.298 --rc geninfo_unexecuted_blocks=1 00:06:28.298 00:06:28.298 ' 00:06:28.298 16:31:32 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:28.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.298 --rc genhtml_branch_coverage=1 00:06:28.298 --rc genhtml_function_coverage=1 00:06:28.298 --rc genhtml_legend=1 00:06:28.298 --rc geninfo_all_blocks=1 00:06:28.298 --rc geninfo_unexecuted_blocks=1 00:06:28.298 00:06:28.298 ' 00:06:28.298 16:31:32 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:28.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.298 --rc genhtml_branch_coverage=1 00:06:28.298 --rc genhtml_function_coverage=1 00:06:28.298 --rc genhtml_legend=1 00:06:28.298 --rc geninfo_all_blocks=1 00:06:28.298 --rc geninfo_unexecuted_blocks=1 00:06:28.298 00:06:28.298 ' 00:06:28.298 16:31:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.298 16:31:32 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:28.298 16:31:32 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.298 16:31:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.298 ************************************ 00:06:28.298 START TEST thread_poller_perf 00:06:28.298 ************************************ 00:06:28.298 16:31:32 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:28.298 [2024-10-14 16:31:32.798965] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:06:28.298 [2024-10-14 16:31:32.799037] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364571 ] 00:06:28.298 [2024-10-14 16:31:32.868241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.298 [2024-10-14 16:31:32.908227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.298 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:29.676 [2024-10-14T14:31:34.310Z] ====================================== 00:06:29.676 [2024-10-14T14:31:34.310Z] busy:2104849448 (cyc) 00:06:29.676 [2024-10-14T14:31:34.310Z] total_run_count: 420000 00:06:29.676 [2024-10-14T14:31:34.310Z] tsc_hz: 2100000000 (cyc) 00:06:29.676 [2024-10-14T14:31:34.310Z] ====================================== 00:06:29.676 [2024-10-14T14:31:34.310Z] poller_cost: 5011 (cyc), 2386 (nsec) 00:06:29.676 00:06:29.676 real 0m1.171s 00:06:29.676 user 0m1.088s 00:06:29.676 sys 0m0.080s 00:06:29.676 16:31:33 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.676 16:31:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.676 ************************************ 00:06:29.676 END TEST thread_poller_perf 00:06:29.676 ************************************ 00:06:29.676 16:31:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.676 16:31:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:29.676 16:31:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.676 16:31:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.676 ************************************ 00:06:29.676 START TEST thread_poller_perf 00:06:29.676 ************************************ 00:06:29.676 16:31:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.676 [2024-10-14 16:31:34.040790] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:06:29.676 [2024-10-14 16:31:34.040859] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364820 ] 00:06:29.676 [2024-10-14 16:31:34.111642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.676 [2024-10-14 16:31:34.151061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.676 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:30.614 [2024-10-14T14:31:35.248Z] ====================================== 00:06:30.614 [2024-10-14T14:31:35.248Z] busy:2101482182 (cyc) 00:06:30.614 [2024-10-14T14:31:35.248Z] total_run_count: 5555000 00:06:30.614 [2024-10-14T14:31:35.248Z] tsc_hz: 2100000000 (cyc) 00:06:30.614 [2024-10-14T14:31:35.248Z] ====================================== 00:06:30.614 [2024-10-14T14:31:35.248Z] poller_cost: 378 (cyc), 180 (nsec) 00:06:30.614 00:06:30.614 real 0m1.168s 00:06:30.614 user 0m1.098s 00:06:30.614 sys 0m0.066s 00:06:30.614 16:31:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.614 16:31:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.614 ************************************ 00:06:30.614 END TEST thread_poller_perf 00:06:30.614 ************************************ 00:06:30.614 16:31:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:30.614 00:06:30.614 real 0m2.647s 00:06:30.614 user 0m2.338s 00:06:30.614 sys 0m0.323s 00:06:30.614 16:31:35 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.614 16:31:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.614 ************************************ 00:06:30.614 END TEST thread 00:06:30.614 ************************************ 00:06:30.873 16:31:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:30.873 16:31:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:30.873 16:31:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.873 16:31:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.873 16:31:35 -- common/autotest_common.sh@10 -- # set +x 00:06:30.873 ************************************ 00:06:30.873 START TEST app_cmdline 00:06:30.873 ************************************ 00:06:30.873 16:31:35 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:30.873 * Looking for test storage... 
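For reference, the two poller_cost figures follow directly from the counters printed above: cycles per poll is busy cycles divided by total_run_count, and nanoseconds per poll is that value scaled by tsc_hz. A quick check of both runs (plain awk, not part of the harness):

    awk 'BEGIN { c = 2104849448 / 420000;  printf "%d cyc, %d nsec\n", c, c * 1e9 / 2100000000 }'  # 5011 cyc, 2386 nsec
    awk 'BEGIN { c = 2101482182 / 5555000; printf "%d cyc, %d nsec\n", c, c * 1e9 / 2100000000 }'  # 378 cyc, 180 nsec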
00:06:30.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:30.873 16:31:35 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.873 16:31:35 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.873 16:31:35 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.873 16:31:35 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.873 16:31:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.873 16:31:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.873 16:31:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.873 16:31:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.873 16:31:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.873 16:31:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.873 16:31:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.874 16:31:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.874 --rc genhtml_branch_coverage=1 00:06:30.874 --rc genhtml_function_coverage=1 00:06:30.874 --rc genhtml_legend=1 00:06:30.874 --rc geninfo_all_blocks=1 00:06:30.874 --rc geninfo_unexecuted_blocks=1 00:06:30.874 00:06:30.874 ' 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.874 --rc genhtml_branch_coverage=1 00:06:30.874 --rc genhtml_function_coverage=1 00:06:30.874 --rc genhtml_legend=1 00:06:30.874 --rc geninfo_all_blocks=1 00:06:30.874 --rc geninfo_unexecuted_blocks=1 
00:06:30.874 00:06:30.874 ' 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.874 --rc genhtml_branch_coverage=1 00:06:30.874 --rc genhtml_function_coverage=1 00:06:30.874 --rc genhtml_legend=1 00:06:30.874 --rc geninfo_all_blocks=1 00:06:30.874 --rc geninfo_unexecuted_blocks=1 00:06:30.874 00:06:30.874 ' 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.874 --rc genhtml_branch_coverage=1 00:06:30.874 --rc genhtml_function_coverage=1 00:06:30.874 --rc genhtml_legend=1 00:06:30.874 --rc geninfo_all_blocks=1 00:06:30.874 --rc geninfo_unexecuted_blocks=1 00:06:30.874 00:06:30.874 ' 00:06:30.874 16:31:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:30.874 16:31:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=365113 00:06:30.874 16:31:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 365113 00:06:30.874 16:31:35 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 365113 ']' 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.874 16:31:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.133 [2024-10-14 16:31:35.516953] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:06:31.133 [2024-10-14 16:31:35.517002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365113 ] 00:06:31.133 [2024-10-14 16:31:35.585604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.133 [2024-10-14 16:31:35.625343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.392 16:31:35 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.392 16:31:35 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:31.392 16:31:35 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:31.392 { 00:06:31.392 "version": "SPDK v25.01-pre git sha1 d6f411c3e", 00:06:31.392 "fields": { 00:06:31.392 "major": 25, 00:06:31.392 "minor": 1, 00:06:31.392 "patch": 0, 00:06:31.392 "suffix": "-pre", 00:06:31.392 "commit": "d6f411c3e" 00:06:31.392 } 00:06:31.392 } 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:31.652 request: 00:06:31.652 { 00:06:31.652 "method": "env_dpdk_get_mem_stats", 00:06:31.652 "req_id": 1 00:06:31.652 } 00:06:31.652 Got JSON-RPC error response 00:06:31.652 response: 00:06:31.652 { 00:06:31.652 "code": -32601, 00:06:31.652 "message": "Method not found" 00:06:31.652 } 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.652 16:31:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 365113 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 365113 ']' 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 365113 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.652 16:31:36 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 365113 00:06:31.910 16:31:36 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.910 16:31:36 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.910 16:31:36 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 365113' 00:06:31.911 killing process with pid 365113 00:06:31.911 16:31:36 app_cmdline -- common/autotest_common.sh@969 -- # kill 365113 00:06:31.911 16:31:36 app_cmdline -- common/autotest_common.sh@974 -- # wait 365113 00:06:32.170 00:06:32.170 real 0m1.311s 00:06:32.170 user 0m1.527s 00:06:32.170 sys 0m0.432s 00:06:32.170 16:31:36 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.170 16:31:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.170 ************************************ 00:06:32.170 END TEST app_cmdline 00:06:32.170 ************************************ 00:06:32.170 16:31:36 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.170 16:31:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.170 16:31:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.170 16:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.170 ************************************ 00:06:32.170 START TEST version 00:06:32.170 ************************************ 00:06:32.170 16:31:36 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:32.170 * Looking for test storage... 
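The sequence above is the point of the cmdline test: the target was started with an RPC allowlist, so spdk_get_version is answered while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601. Reproduced roughly outside the harness (paths relative to an SPDK checkout):

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version        # allowed, returns the version object shown above
    ./scripts/rpc.py env_dpdk_get_mem_stats  # rejected: "Method not found" (-32601)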
00:06:32.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:32.170 16:31:36 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:32.170 16:31:36 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:32.170 16:31:36 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:32.430 16:31:36 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:32.430 16:31:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.430 16:31:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.430 16:31:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.430 16:31:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.430 16:31:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.430 16:31:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.430 16:31:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.430 16:31:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.430 16:31:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.430 16:31:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.430 16:31:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.430 16:31:36 version -- scripts/common.sh@344 -- # case "$op" in 00:06:32.430 16:31:36 version -- scripts/common.sh@345 -- # : 1 00:06:32.430 16:31:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.430 16:31:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.430 16:31:36 version -- scripts/common.sh@365 -- # decimal 1 00:06:32.430 16:31:36 version -- scripts/common.sh@353 -- # local d=1 00:06:32.430 16:31:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.430 16:31:36 version -- scripts/common.sh@355 -- # echo 1 00:06:32.430 16:31:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.430 16:31:36 version -- scripts/common.sh@366 -- # decimal 2 00:06:32.430 16:31:36 version -- scripts/common.sh@353 -- # local d=2 00:06:32.430 16:31:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.430 16:31:36 version -- scripts/common.sh@355 -- # echo 2 00:06:32.430 16:31:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.430 16:31:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.430 16:31:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.430 16:31:36 version -- scripts/common.sh@368 -- # return 0 00:06:32.430 16:31:36 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.430 16:31:36 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:32.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.430 --rc genhtml_branch_coverage=1 00:06:32.430 --rc genhtml_function_coverage=1 00:06:32.430 --rc genhtml_legend=1 00:06:32.430 --rc geninfo_all_blocks=1 00:06:32.430 --rc geninfo_unexecuted_blocks=1 00:06:32.430 00:06:32.430 ' 00:06:32.430 16:31:36 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:32.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.430 --rc genhtml_branch_coverage=1 00:06:32.430 --rc genhtml_function_coverage=1 00:06:32.430 --rc genhtml_legend=1 00:06:32.430 --rc geninfo_all_blocks=1 00:06:32.430 --rc geninfo_unexecuted_blocks=1 00:06:32.430 00:06:32.430 ' 00:06:32.430 16:31:36 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:32.430 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.430 --rc genhtml_branch_coverage=1 00:06:32.430 --rc genhtml_function_coverage=1 00:06:32.430 --rc genhtml_legend=1 00:06:32.430 --rc geninfo_all_blocks=1 00:06:32.430 --rc geninfo_unexecuted_blocks=1 00:06:32.430 00:06:32.430 ' 00:06:32.430 16:31:36 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:32.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.430 --rc genhtml_branch_coverage=1 00:06:32.430 --rc genhtml_function_coverage=1 00:06:32.430 --rc genhtml_legend=1 00:06:32.430 --rc geninfo_all_blocks=1 00:06:32.430 --rc geninfo_unexecuted_blocks=1 00:06:32.430 00:06:32.430 ' 00:06:32.430 16:31:36 version -- app/version.sh@17 -- # get_header_version major 00:06:32.430 16:31:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.430 16:31:36 version -- app/version.sh@14 -- # cut -f2 00:06:32.430 16:31:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.430 16:31:36 version -- app/version.sh@17 -- # major=25 00:06:32.430 16:31:36 version -- app/version.sh@18 -- # get_header_version minor 00:06:32.430 16:31:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.430 16:31:36 version -- app/version.sh@14 -- # cut -f2 00:06:32.430 16:31:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.430 16:31:36 version -- app/version.sh@18 -- # minor=1 00:06:32.430 16:31:36 version -- app/version.sh@19 -- # get_header_version patch 00:06:32.430 16:31:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.430 16:31:36 version -- app/version.sh@14 -- # cut -f2 00:06:32.430 16:31:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.430 16:31:36 version -- app/version.sh@19 -- # patch=0 00:06:32.430 16:31:36 version -- app/version.sh@20 -- # get_header_version suffix 00:06:32.430 16:31:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:32.430 16:31:36 version -- app/version.sh@14 -- # cut -f2 00:06:32.430 16:31:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:32.430 16:31:36 version -- app/version.sh@20 -- # suffix=-pre 00:06:32.430 16:31:36 version -- app/version.sh@22 -- # version=25.1 00:06:32.430 16:31:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:32.430 16:31:36 version -- app/version.sh@28 -- # version=25.1rc0 00:06:32.430 16:31:36 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:32.430 16:31:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:32.430 16:31:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:32.430 16:31:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:32.430 00:06:32.430 real 0m0.244s 00:06:32.430 user 0m0.148s 00:06:32.430 sys 0m0.139s 00:06:32.430 16:31:36 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.430 
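The get_header_version helper exercised above is a grep/cut/tr pipeline over include/spdk/version.h; for example, the major number comes from (a sketch of the same commands the xtrace shows):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'

With major=25, minor=1, patch=0 and suffix=-pre the script assembles 25.1rc0 and compares it against python3 -c 'import spdk; print(spdk.__version__)', which is the 25.1rc0 == 25.1rc0 check at the end of the test.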
16:31:36 version -- common/autotest_common.sh@10 -- # set +x 00:06:32.430 ************************************ 00:06:32.430 END TEST version 00:06:32.430 ************************************ 00:06:32.430 16:31:36 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:32.430 16:31:36 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:32.430 16:31:36 -- spdk/autotest.sh@194 -- # uname -s 00:06:32.430 16:31:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:32.430 16:31:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:32.430 16:31:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:32.430 16:31:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:32.430 16:31:36 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:32.430 16:31:36 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:32.430 16:31:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:32.430 16:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.430 16:31:36 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:32.430 16:31:36 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:32.430 16:31:36 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:32.430 16:31:36 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:32.430 16:31:36 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:32.430 16:31:36 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:32.430 16:31:36 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:32.430 16:31:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:32.430 16:31:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.430 16:31:36 -- common/autotest_common.sh@10 -- # set +x 00:06:32.430 ************************************ 00:06:32.430 START TEST nvmf_tcp 00:06:32.430 ************************************ 00:06:32.430 16:31:37 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:32.690 * Looking for test storage... 
00:06:32.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:32.690 16:31:37 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:32.690 16:31:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:32.690 16:31:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:32.690 16:31:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.690 16:31:37 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:32.690 16:31:37 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.690 16:31:37 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:32.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.690 --rc genhtml_branch_coverage=1 00:06:32.690 --rc genhtml_function_coverage=1 00:06:32.690 --rc genhtml_legend=1 00:06:32.690 --rc geninfo_all_blocks=1 00:06:32.690 --rc geninfo_unexecuted_blocks=1 00:06:32.690 00:06:32.690 ' 00:06:32.690 16:31:37 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:32.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.690 --rc genhtml_branch_coverage=1 00:06:32.690 --rc genhtml_function_coverage=1 00:06:32.690 --rc genhtml_legend=1 00:06:32.690 --rc geninfo_all_blocks=1 00:06:32.690 --rc geninfo_unexecuted_blocks=1 00:06:32.690 00:06:32.690 ' 00:06:32.690 16:31:37 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:32.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.690 --rc genhtml_branch_coverage=1 00:06:32.690 --rc genhtml_function_coverage=1 00:06:32.690 --rc genhtml_legend=1 00:06:32.690 --rc geninfo_all_blocks=1 00:06:32.690 --rc geninfo_unexecuted_blocks=1 00:06:32.690 00:06:32.690 ' 00:06:32.690 16:31:37 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:32.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.690 --rc genhtml_branch_coverage=1 00:06:32.690 --rc genhtml_function_coverage=1 00:06:32.690 --rc genhtml_legend=1 00:06:32.690 --rc geninfo_all_blocks=1 00:06:32.690 --rc geninfo_unexecuted_blocks=1 00:06:32.690 00:06:32.690 ' 00:06:32.690 16:31:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:32.690 16:31:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:32.690 16:31:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:32.691 16:31:37 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:32.691 16:31:37 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.691 16:31:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.691 ************************************ 00:06:32.691 START TEST nvmf_target_core 00:06:32.691 ************************************ 00:06:32.691 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:32.951 * Looking for test storage... 00:06:32.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:32.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.951 --rc genhtml_branch_coverage=1 00:06:32.951 --rc genhtml_function_coverage=1 00:06:32.951 --rc genhtml_legend=1 00:06:32.951 --rc geninfo_all_blocks=1 00:06:32.951 --rc geninfo_unexecuted_blocks=1 00:06:32.951 00:06:32.951 ' 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:32.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.951 --rc genhtml_branch_coverage=1 00:06:32.951 --rc genhtml_function_coverage=1 00:06:32.951 --rc genhtml_legend=1 00:06:32.951 --rc geninfo_all_blocks=1 00:06:32.951 --rc geninfo_unexecuted_blocks=1 00:06:32.951 00:06:32.951 ' 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:32.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.951 --rc genhtml_branch_coverage=1 00:06:32.951 --rc genhtml_function_coverage=1 00:06:32.951 --rc genhtml_legend=1 00:06:32.951 --rc geninfo_all_blocks=1 00:06:32.951 --rc geninfo_unexecuted_blocks=1 00:06:32.951 00:06:32.951 ' 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:32.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.951 --rc genhtml_branch_coverage=1 00:06:32.951 --rc genhtml_function_coverage=1 00:06:32.951 --rc genhtml_legend=1 00:06:32.951 --rc geninfo_all_blocks=1 00:06:32.951 --rc geninfo_unexecuted_blocks=1 00:06:32.951 00:06:32.951 ' 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.951 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:32.952 
************************************ 00:06:32.952 START TEST nvmf_abort 00:06:32.952 ************************************ 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:32.952 * Looking for test storage... 00:06:32.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:06:32.952 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:33.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.212 --rc genhtml_branch_coverage=1 00:06:33.212 --rc genhtml_function_coverage=1 00:06:33.212 --rc genhtml_legend=1 00:06:33.212 --rc geninfo_all_blocks=1 00:06:33.212 --rc geninfo_unexecuted_blocks=1 00:06:33.212 00:06:33.212 ' 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:33.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.212 --rc genhtml_branch_coverage=1 00:06:33.212 --rc genhtml_function_coverage=1 00:06:33.212 --rc genhtml_legend=1 00:06:33.212 --rc geninfo_all_blocks=1 00:06:33.212 --rc geninfo_unexecuted_blocks=1 00:06:33.212 00:06:33.212 ' 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:33.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.212 --rc genhtml_branch_coverage=1 00:06:33.212 --rc genhtml_function_coverage=1 00:06:33.212 --rc genhtml_legend=1 00:06:33.212 --rc geninfo_all_blocks=1 00:06:33.212 --rc geninfo_unexecuted_blocks=1 00:06:33.212 00:06:33.212 ' 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:33.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.212 --rc genhtml_branch_coverage=1 00:06:33.212 --rc genhtml_function_coverage=1 00:06:33.212 --rc genhtml_legend=1 00:06:33.212 --rc geninfo_all_blocks=1 00:06:33.212 --rc geninfo_unexecuted_blocks=1 00:06:33.212 00:06:33.212 ' 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.212 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
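For reference, nvmftestinit in this NET_TYPE=phy run uses the two physical e810 ports rather than virtual interfaces: it moves one port into a private network namespace for the SPDK target and leaves the other in the root namespace for the initiator. A condensed, hand-runnable sketch of that setup, assembled from the commands traced below (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this host; the real helper additionally flushes stale addresses and tags the firewall rule with an SPDK_NVMF comment):

  # target port goes into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps cvl_0_1 in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port towards the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # reachability check in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1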
00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:33.213 16:31:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.799 16:31:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:39.799 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:39.799 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:39.799 16:31:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.799 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:39.800 Found net devices under 0000:86:00.0: cvl_0_0 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:39.800 Found net devices under 0000:86:00.1: cvl_0_1 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:39.800 16:31:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:39.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:39.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:06:39.800 00:06:39.800 --- 10.0.0.2 ping statistics --- 00:06:39.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.800 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:39.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:39.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:06:39.800 00:06:39.800 --- 10.0.0.1 ping statistics --- 00:06:39.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.800 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=368794 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 368794 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 368794 ']' 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.800 16:31:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.800 [2024-10-14 16:31:43.790248] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:06:39.800 [2024-10-14 16:31:43.790289] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.800 [2024-10-14 16:31:43.863911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.800 [2024-10-14 16:31:43.905021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.800 [2024-10-14 16:31:43.905058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.800 [2024-10-14 16:31:43.905065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.800 [2024-10-14 16:31:43.905071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.800 [2024-10-14 16:31:43.905076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:39.800 [2024-10-14 16:31:43.906521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.800 [2024-10-14 16:31:43.906630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.800 [2024-10-14 16:31:43.906631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.800 [2024-10-14 16:31:44.053915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.800 Malloc0 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.800 Delay0 
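The rpc_cmd invocations above, and the subsystem calls that follow, are the harness wrapper around SPDK's scripts/rpc.py talking to the nvmf_tgt just started inside cvl_0_0_ns_spdk over the default /var/tmp/spdk.sock socket. Roughly the same abort-test topology can be reproduced by hand; a sketch using the RPC names and arguments from this trace (the four 1000000 values on the delay bdev are the injected read/write latencies in microseconds, and that queued I/O is what gives the abort example something to abort):

  # TCP transport, then a 64 MiB / 4096-byte-block malloc bdev wrapped in a delay bdev
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # expose Delay0 over NVMe/TCP on the target-namespace address
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # drive aborts from the initiator side at queue depth 128
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128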
00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:39.800 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.801 [2024-10-14 16:31:44.134761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.801 16:31:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:39.801 [2024-10-14 16:31:44.220759] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:41.704 Initializing NVMe Controllers 00:06:41.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:41.704 controller IO queue size 128 less than required 00:06:41.704 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:41.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:41.704 Initialization complete. Launching workers. 
00:06:41.704 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37271 00:06:41.704 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37336, failed to submit 62 00:06:41.704 success 37275, unsuccessful 61, failed 0 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:41.704 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:41.704 rmmod nvme_tcp 00:06:41.704 rmmod nvme_fabrics 00:06:41.963 rmmod nvme_keyring 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 368794 ']' 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 368794 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 368794 ']' 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 368794 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 368794 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 368794' 00:06:41.963 killing process with pid 368794 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 368794 00:06:41.963 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 368794 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.222 16:31:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.175 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:44.175 00:06:44.175 real 0m11.190s 00:06:44.175 user 0m11.424s 00:06:44.175 sys 0m5.470s 00:06:44.175 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.175 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:44.175 ************************************ 00:06:44.175 END TEST nvmf_abort 00:06:44.175 ************************************ 00:06:44.175 16:31:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:44.175 16:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:44.175 16:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.175 16:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.175 ************************************ 00:06:44.175 START TEST nvmf_ns_hotplug_stress 00:06:44.175 ************************************ 00:06:44.175 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:44.435 * Looking for test storage... 
00:06:44.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.435 --rc genhtml_branch_coverage=1 00:06:44.435 --rc genhtml_function_coverage=1 00:06:44.435 --rc genhtml_legend=1 00:06:44.435 --rc geninfo_all_blocks=1 00:06:44.435 --rc geninfo_unexecuted_blocks=1 00:06:44.435 00:06:44.435 ' 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.435 --rc genhtml_branch_coverage=1 00:06:44.435 --rc genhtml_function_coverage=1 00:06:44.435 --rc genhtml_legend=1 00:06:44.435 --rc geninfo_all_blocks=1 00:06:44.435 --rc geninfo_unexecuted_blocks=1 00:06:44.435 00:06:44.435 ' 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.435 --rc genhtml_branch_coverage=1 00:06:44.435 --rc genhtml_function_coverage=1 00:06:44.435 --rc genhtml_legend=1 00:06:44.435 --rc geninfo_all_blocks=1 00:06:44.435 --rc geninfo_unexecuted_blocks=1 00:06:44.435 00:06:44.435 ' 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:44.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.435 --rc genhtml_branch_coverage=1 00:06:44.435 --rc genhtml_function_coverage=1 00:06:44.435 --rc genhtml_legend=1 00:06:44.435 --rc geninfo_all_blocks=1 00:06:44.435 --rc geninfo_unexecuted_blocks=1 00:06:44.435 00:06:44.435 ' 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.435 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:44.436 16:31:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:51.007 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.007 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:51.007 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:51.007 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:51.007 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:51.008 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.008 
16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:51.008 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:51.008 Found net devices under 0000:86:00.0: cvl_0_0 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:51.008 Found net devices under 0000:86:00.1: cvl_0_1 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:51.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:06:51.008 00:06:51.008 --- 10.0.0.2 ping statistics --- 00:06:51.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.008 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:51.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:06:51.008 00:06:51.008 --- 10.0.0.1 ping statistics --- 00:06:51.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.008 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:51.008 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=372820 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 372820 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
372820 ']' 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.009 16:31:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:51.009 [2024-10-14 16:31:55.025570] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:06:51.009 [2024-10-14 16:31:55.025628] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.009 [2024-10-14 16:31:55.098006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.009 [2024-10-14 16:31:55.137904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.009 [2024-10-14 16:31:55.137939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.009 [2024-10-14 16:31:55.137945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.009 [2024-10-14 16:31:55.137952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.009 [2024-10-14 16:31:55.137956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
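The trace above is nvmftestinit building the NVMe/TCP test bed before the target application comes up: the first e810 port (cvl_0_0) is moved into its own network namespace to act as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, connectivity is verified in both directions with ping, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that sequence, using only the interface names, addresses, and nvmf_tgt invocation shown in this log (not the exact nvmftestinit code):

  ip netns add cvl_0_0_ns_spdk                                     # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP port 4420 through
  ping -c 1 10.0.0.2                                               # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
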
00:06:51.009 [2024-10-14 16:31:55.139376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.009 [2024-10-14 16:31:55.139465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.009 [2024-10-14 16:31:55.139466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.009 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.009 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:51.009 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:51.009 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:51.009 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:51.009 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.009 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:51.009 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:51.009 [2024-10-14 16:31:55.451763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.009 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:51.268 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.268 [2024-10-14 16:31:55.841171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.268 16:31:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:51.527 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:51.786 Malloc0 00:06:51.786 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:52.044 Delay0 00:06:52.045 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.045 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:52.303 NULL1 00:06:52.303 16:31:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:52.562 16:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=373089 00:06:52.562 16:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:06:52.562 16:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.562 16:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:52.821 16:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.821 16:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:52.821 16:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:53.079 true 00:06:53.079 16:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:06:53.079 16:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.338 16:31:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.596 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:53.596 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:53.596 true 00:06:53.596 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:06:53.596 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.855 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.113 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:54.113 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:54.372 true 00:06:54.372 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:06:54.372 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.372 16:31:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.631 16:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:54.631 16:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:54.889 true 00:06:54.889 16:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:06:54.889 16:31:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.824 Read completed with error (sct=0, sc=11) 00:06:55.824 16:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.824 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.824 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.082 16:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:56.082 16:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:56.340 true 00:06:56.340 16:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:06:56.340 16:32:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.276 16:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.276 16:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:57.276 16:32:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:57.534 true 00:06:57.534 16:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:06:57.534 16:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.792 16:32:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.051 16:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:58.051 16:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:58.051 true 00:06:58.051 16:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:06:58.051 16:32:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.428 16:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.428 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.428 16:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:59.428 16:32:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:59.687 true 00:06:59.687 16:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:06:59.687 16:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.946 16:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.946 16:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:59.946 16:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:00.206 true 00:07:00.206 16:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:00.206 16:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.464 16:32:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.723 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:00.723 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:00.723 true 00:07:00.723 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:00.723 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.983 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.241 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:01.241 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:01.501 true 00:07:01.501 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:01.501 16:32:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.435 16:32:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.693 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:02.693 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:02.951 true 00:07:02.951 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:02.951 16:32:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.888 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.888 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:03.888 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:04.146 true 00:07:04.146 16:32:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:04.146 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.405 16:32:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.405 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:04.405 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:04.663 true 00:07:04.663 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:04.663 16:32:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.039 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.039 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.039 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.039 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.039 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.039 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.039 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.039 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:06.039 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:06.297 true 00:07:06.297 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:06.297 16:32:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.233 16:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.233 16:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:07.233 16:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:07.492 true 00:07:07.492 16:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:07.492 16:32:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.751 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.751 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:07.751 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:08.010 true 00:07:08.010 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:08.010 16:32:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.387 16:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.387 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.387 16:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:09.387 16:32:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:09.646 true 00:07:09.646 16:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:09.646 16:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.581 16:32:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.581 16:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:10.581 16:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:10.839 true 00:07:10.839 16:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:10.839 16:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.096 16:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.096 16:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:11.096 16:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:11.352 true 00:07:11.352 16:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:11.352 16:32:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.725 16:32:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.725 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.725 16:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:12.725 16:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:12.984 true 00:07:12.984 16:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:12.984 16:32:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:13.916 16:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.916 16:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:13.916 16:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:14.176 true 00:07:14.176 16:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:14.176 16:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.434 16:32:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.434 16:32:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:14.434 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:14.693 true 00:07:14.693 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:14.693 16:32:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.069 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.069 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:16.069 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:16.329 true 00:07:16.329 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:16.329 16:32:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.265 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.265 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:17.265 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:17.524 true 00:07:17.524 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:17.524 16:32:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.783 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.783 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:17.783 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:18.041 true 00:07:18.041 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:18.041 16:32:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:18.978 16:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.237 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.237 16:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:19.237 16:32:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:19.497 true 00:07:19.497 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:19.497 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.755 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.014 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:20.014 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:20.014 true 00:07:20.014 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:20.014 16:32:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.391 16:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:21.391 16:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:21.391 16:32:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:21.651 true 00:07:21.651 16:32:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:21.651 16:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.601 16:32:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.601 16:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:22.601 16:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:22.860 true 00:07:22.860 16:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:22.860 16:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.120 Initializing NVMe Controllers 00:07:23.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:23.120 Controller IO queue size 128, less than required. 00:07:23.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:23.120 Controller IO queue size 128, less than required. 00:07:23.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:23.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:23.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:23.120 Initialization complete. Launching workers. 
00:07:23.120 ======================================================== 00:07:23.120 Latency(us) 00:07:23.120 Device Information : IOPS MiB/s Average min max 00:07:23.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1736.29 0.85 45435.71 2497.41 1026406.06 00:07:23.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16044.13 7.83 7955.91 1705.29 439681.02 00:07:23.120 ======================================================== 00:07:23.120 Total : 17780.43 8.68 11615.89 1705.29 1026406.06 00:07:23.120 00:07:23.120 16:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.120 16:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:23.120 16:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:23.379 true 00:07:23.379 16:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 373089 00:07:23.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (373089) - No such process 00:07:23.379 16:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 373089 00:07:23.379 16:32:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.638 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.896 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:23.896 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:23.896 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:23.896 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:23.896 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:23.896 null0 00:07:24.155 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.155 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.155 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:24.155 null1 00:07:24.155 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.155 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.155 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:24.414 null2 00:07:24.414 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.414 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.414 16:32:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:24.673 null3 00:07:24.673 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.673 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.673 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:24.673 null4 00:07:24.933 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.933 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.933 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:24.933 null5 00:07:24.933 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:24.933 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:24.933 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:25.193 null6 00:07:25.193 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.193 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.193 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:25.453 null7 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
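[annotation] At this point the trace has moved from the single-namespace resize phase into the multi-threaded phase: script lines @58-@64 create eight 100 MB null bdevs (null0 through null7, 4096-byte blocks) via bdev_null_create and launch one background add_remove worker per bdev, collecting the worker PIDs. A minimal bash sketch of what that stretch of the script appears to do; the loop structure, and the rpc_py/nqn helper variables, are reconstructed from the trace rather than copied from ns_hotplug_stress.sh, and add_remove itself is sketched after the next stretch of log:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    nthreads=8
    pids=()

    # @59-@60: one null bdev per worker (name, size in MB, block size in bytes)
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done

    # @62-@64: one background add_remove worker per bdev, namespace IDs 1..8,
    # remembering each worker's PID for the later wait
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done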
00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
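[annotation] Each worker runs the add_remove function traced at script lines @14-@18: it binds a namespace ID and a bdev name (the "local nsid=... bdev=..." entries), then ten times in a row attaches that bdev to the subsystem as its namespace and detaches it again, which is what produces the interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls that fill the rest of this log. A sketch reconstructed from the trace, with the argument order taken verbatim from the logged commands and rpc_py/nqn as defined in the previous sketch:

    # @14: bind this worker's namespace ID and backing bdev
    # @16-@18: ten add/remove cycles against the shared subsystem
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }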
00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
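[annotation] For completeness, the earlier single-namespace phase of this test (the @44-@50 cycle logged between roughly 00:07:14 and 00:07:23) follows the same hotplug pattern with one worker: while the background I/O job (PID 373089 in this run) is still alive, the script hot-removes namespace 1, re-adds the Delay0 bdev, bumps null_size by one and resizes NULL1 to the new value; the "Message suppressed ... Read completed with error" entries are presumably that job's reads landing while the namespace is detached, which is the point of the stress. A hedged reconstruction of that loop; the starting value and the exact increment arithmetic are assumptions, since this excerpt only joins in at null_size=1023:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=373089     # background I/O job, PID taken from this trace
    null_size=1022      # assumed; the excerpt picks up at 1023

    # @44: keep cycling while the I/O job still exists (kill -0 sends no signal)
    while kill -0 "$perf_pid" 2>/dev/null; do
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1          # @45
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0        # @46
        ((++null_size))                                      # @49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"        # @50
    done
    wait "$perf_pid"    # @53: reap the I/O job once kill -0 reports it gone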
00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.453 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:25.454 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 378702 378704 378705 378707 378709 378711 378713 378715 00:07:25.454 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:25.454 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:25.454 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:25.454 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.454 16:32:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.714 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.973 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.231 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.490 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.490 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.490 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:26.490 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:26.490 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:26.490 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:26.490 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.490 16:32:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:26.749 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
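[annotation] Once all eight workers are running, the parent does nothing but wait for them: the "wait 378702 378704 ..." entry at script line @66, logged around 00:07:25, is simply the collected pids array expanded onto a single wait call, and everything after it in this log is the eight workers' add/remove cycles interleaving in whatever order the scheduler produces. In sketch form, using the pids array from the launch sketch above:

    # @66: block until every add_remove worker has finished its ten cycles
    wait "${pids[@]}"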
00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.009 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.268 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.268 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.268 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.268 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.268 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.268 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.268 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.268 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.561 16:32:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:27.562 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.562 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.562 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.562 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.562 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.562 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.562 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.562 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.562 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:27.868 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.154 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.154 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.154 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.154 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.154 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.154 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.154 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.154 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.413 16:32:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.413 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.413 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.413 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.413 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.413 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.413 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.413 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.413 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.672 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.931 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.931 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.931 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.931 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.931 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.931 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.931 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.931 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
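For reference, a condensed reading of the traced hotplug cycle above, reconstructed from the @16-@18 markers (the real script interleaves these RPCs, so the trace order is shuffled; this is a sketch of one cycle, not the script verbatim):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
# hot-add null bdevs null0..null7 as namespaces 1..8 of cnode1
for n in 1 2 3 4 5 6 7 8; do
    $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
done
# then hot-remove them; the removal order below is the shuffled order seen in this run
for n in 4 6 5 2 8 7 1 3; do
    $rpc nvmf_subsystem_remove_ns "$nqn" "$n"
done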
00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.190 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.449 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.449 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.449 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.450 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.450 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.450 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.450 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.450 16:32:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.450 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.450 rmmod nvme_tcp 00:07:29.709 rmmod nvme_fabrics 00:07:29.709 rmmod nvme_keyring 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 372820 ']' 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 372820 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 372820 ']' 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 372820 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 372820 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 372820' 00:07:29.709 killing process with pid 372820 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 372820 00:07:29.709 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 372820 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 
-- # [[ tcp == \t\c\p ]] 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.968 16:32:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.873 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:31.873 00:07:31.873 real 0m47.698s 00:07:31.874 user 3m14.671s 00:07:31.874 sys 0m15.383s 00:07:31.874 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.874 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:31.874 ************************************ 00:07:31.874 END TEST nvmf_ns_hotplug_stress 00:07:31.874 ************************************ 00:07:31.874 16:32:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:31.874 16:32:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:31.874 16:32:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.874 16:32:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:32.133 ************************************ 00:07:32.133 START TEST nvmf_delete_subsystem 00:07:32.133 ************************************ 00:07:32.133 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:32.133 * Looking for test storage... 
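Before the next test begins, nvmftestfini tears down the hotplug-test environment. Condensed from the trace above (not verbatim; _remove_spdk_ns is traced but its body is not shown, so the namespace-deletion line is an assumption):
modprobe -v -r nvme-tcp          # the log shows this also unloading nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill 372820                      # nvmf_tgt pid from this run; killprocess also waits for it to exit
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged firewall rules
ip netns delete cvl_0_0_ns_spdk  # assumption: what _remove_spdk_ns amounts to in this run
ip -4 addr flush cvl_0_1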
00:07:32.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.133 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:32.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.134 --rc genhtml_branch_coverage=1 00:07:32.134 --rc genhtml_function_coverage=1 00:07:32.134 --rc genhtml_legend=1 00:07:32.134 --rc geninfo_all_blocks=1 00:07:32.134 --rc geninfo_unexecuted_blocks=1 00:07:32.134 00:07:32.134 ' 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:32.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.134 --rc genhtml_branch_coverage=1 00:07:32.134 --rc genhtml_function_coverage=1 00:07:32.134 --rc genhtml_legend=1 00:07:32.134 --rc geninfo_all_blocks=1 00:07:32.134 --rc geninfo_unexecuted_blocks=1 00:07:32.134 00:07:32.134 ' 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:32.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.134 --rc genhtml_branch_coverage=1 00:07:32.134 --rc genhtml_function_coverage=1 00:07:32.134 --rc genhtml_legend=1 00:07:32.134 --rc geninfo_all_blocks=1 00:07:32.134 --rc geninfo_unexecuted_blocks=1 00:07:32.134 00:07:32.134 ' 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:32.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.134 --rc genhtml_branch_coverage=1 00:07:32.134 --rc genhtml_function_coverage=1 00:07:32.134 --rc genhtml_legend=1 00:07:32.134 --rc geninfo_all_blocks=1 00:07:32.134 --rc geninfo_unexecuted_blocks=1 00:07:32.134 00:07:32.134 ' 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:32.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:32.134 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:32.135 16:32:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:38.713 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.713 
16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:38.713 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:38.713 Found net devices under 0000:86:00.0: cvl_0_0 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:38.713 Found net devices under 0000:86:00.1: cvl_0_1 
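The two "Found net devices under ..." lines come from the NIC discovery loop traced above; a simplified sketch of it, using the same array expansions seen in the trace (the real gather_supported_nvmf_pci_devs in test/nvmf/common.sh additionally filters by vendor/device ID, transport type, and link state):
for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # sysfs entries for the NIC's netdevs
    pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the path, keep interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done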
00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:38.713 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:38.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:07:38.714 00:07:38.714 --- 10.0.0.2 ping statistics --- 00:07:38.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.714 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:38.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:07:38.714 00:07:38.714 --- 10.0.0.1 ping statistics --- 00:07:38.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.714 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=383108 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 383108 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 383108 ']' 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.714 16:32:42 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.714 16:32:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 [2024-10-14 16:32:42.817263] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:07:38.714 [2024-10-14 16:32:42.817311] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.714 [2024-10-14 16:32:42.888458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.714 [2024-10-14 16:32:42.930880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.714 [2024-10-14 16:32:42.930915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.714 [2024-10-14 16:32:42.930923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.714 [2024-10-14 16:32:42.930929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.714 [2024-10-14 16:32:42.930934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.714 [2024-10-14 16:32:42.932123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.714 [2024-10-14 16:32:42.932126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 [2024-10-14 16:32:43.067211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:38.714 16:32:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 [2024-10-14 16:32:43.087403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 NULL1 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 Delay0 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=383157 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:38.714 16:32:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:38.714 [2024-10-14 16:32:43.188293] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
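The setup traced above, condensed into plain RPC calls (rpc_cmd in the trace is the test wrapper around scripts/rpc.py; the delay-bdev parameters are the 1,000,000 us values from this run, i.e. roughly one second of artificial latency):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                     # null backing bdev, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns "$nqn" Delay0
spdk_nvme_perf then drives 512 B random read/write I/O at the 10.0.0.2:4420 listener while the test deletes the subsystem; the in-flight commands complete with errors, which is what the flood of "completed with error (sct=0, sc=8)" lines below records.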
00:07:40.617 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.617 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.617 16:32:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 Write completed with error (sct=0, sc=8) 00:07:40.876 starting I/O failed: -6 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 Write completed with error (sct=0, sc=8) 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 starting I/O failed: -6 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 starting I/O failed: -6 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.876 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 [2024-10-14 16:32:45.303258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825570 is same with the state(6) to be set 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read 
completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 
Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 starting I/O failed: -6 00:07:40.877 starting I/O failed: -6 00:07:40.877 starting I/O failed: -6 00:07:40.877 starting I/O failed: -6 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with 
error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Write completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 Read completed with error (sct=0, sc=8) 00:07:40.877 starting I/O failed: -6 00:07:40.878 Read completed with error (sct=0, sc=8) 00:07:40.878 Read completed with error (sct=0, sc=8) 00:07:40.878 starting I/O failed: -6 00:07:40.878 [2024-10-14 16:32:45.308730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4a58000c00 is same with the state(6) to be set 00:07:41.814 [2024-10-14 16:32:46.282360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a70 is same with the state(6) to be set 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 [2024-10-14 16:32:46.306388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825390 is same with the state(6) to be set 00:07:41.814 Read completed with error 
(sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 [2024-10-14 16:32:46.306665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x825750 is same with the state(6) to be set 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 [2024-10-14 16:32:46.309019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4a5800d7c0 is same with the state(6) to be set 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Write completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 
00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.814 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Write completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Write completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Write completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Write completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 Read completed with error (sct=0, sc=8) 00:07:41.815 [2024-10-14 16:32:46.311121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4a5800cfe0 is same with the state(6) to be set 00:07:41.815 Initializing NVMe Controllers 00:07:41.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:41.815 Controller IO queue size 128, less than required. 00:07:41.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:41.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:41.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:41.815 Initialization complete. Launching workers. 
00:07:41.815 ======================================================== 00:07:41.815 Latency(us) 00:07:41.815 Device Information : IOPS MiB/s Average min max 00:07:41.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.81 0.08 906232.33 294.01 1005656.49 00:07:41.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 174.76 0.09 932744.33 301.55 2002277.85 00:07:41.815 ======================================================== 00:07:41.815 Total : 339.57 0.17 919877.07 294.01 2002277.85 00:07:41.815 00:07:41.815 [2024-10-14 16:32:46.311647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x826a70 (9): Bad file descriptor 00:07:41.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:41.815 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.815 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:41.815 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 383157 00:07:41.815 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 383157 00:07:42.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (383157) - No such process 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 383157 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 383157 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 383157 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.382 16:32:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.382 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.383 [2024-10-14 16:32:46.843423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.383 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.383 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.383 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.383 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.383 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.383 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=383822 00:07:42.383 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:42.383 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:42.383 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 383822 00:07:42.383 16:32:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.383 [2024-10-14 16:32:46.919773] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
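A second perf run (pid 383822, 3 seconds this time) has just been started against the re-created subsystem, and the repeated delete_subsystem.sh@60/@57/@58 entries that follow are the script polling for that process to finish. A rough reconstruction of that wait loop, assuming the xtrace line numbers map onto a plain while loop, looks like:

    # Sketch of the polling visible below (not the verbatim script text).
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do    # perf process still alive?
        sleep 0.5
        ((delay++ > 20)) && break                 # stop waiting after ~20 polls (~10 s)
    done

Each iteration prints one "(( delay++ > 20 ))" / "kill -0 383822" / "sleep 0.5" triple in the log; once perf exits, kill -0 reports "No such process" and the script moves on to teardown.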
00:07:42.950 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.950 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 383822 00:07:42.950 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.526 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.526 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 383822 00:07:43.526 16:32:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:43.790 16:32:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.790 16:32:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 383822 00:07:43.790 16:32:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.357 16:32:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.357 16:32:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 383822 00:07:44.357 16:32:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:44.925 16:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:44.925 16:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 383822 00:07:44.925 16:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.492 16:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.492 16:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 383822 00:07:45.492 16:32:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.751 Initializing NVMe Controllers 00:07:45.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:45.751 Controller IO queue size 128, less than required. 00:07:45.751 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:45.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:45.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:45.751 Initialization complete. Launching workers. 
00:07:45.751 ======================================================== 00:07:45.751 Latency(us) 00:07:45.751 Device Information : IOPS MiB/s Average min max 00:07:45.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002382.91 1000125.17 1041427.48 00:07:45.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003811.92 1000115.90 1040972.55 00:07:45.751 ======================================================== 00:07:45.751 Total : 256.00 0.12 1003097.42 1000115.90 1041427.48 00:07:45.751 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 383822 00:07:46.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (383822) - No such process 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 383822 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:46.010 rmmod nvme_tcp 00:07:46.010 rmmod nvme_fabrics 00:07:46.010 rmmod nvme_keyring 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 383108 ']' 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 383108 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 383108 ']' 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 383108 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 383108 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 383108' 00:07:46.010 killing process with pid 383108 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 383108 00:07:46.010 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 383108 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.270 16:32:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.201 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:48.201 00:07:48.201 real 0m16.223s 00:07:48.201 user 0m29.267s 00:07:48.201 sys 0m5.526s 00:07:48.201 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.201 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:48.201 ************************************ 00:07:48.201 END TEST nvmf_delete_subsystem 00:07:48.201 ************************************ 00:07:48.201 16:32:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:48.201 16:32:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:48.201 16:32:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.201 16:32:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:48.201 ************************************ 00:07:48.201 START TEST nvmf_host_management 00:07:48.201 ************************************ 00:07:48.201 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:48.461 * Looking for test storage... 
00:07:48.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:48.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.461 --rc genhtml_branch_coverage=1 00:07:48.461 --rc genhtml_function_coverage=1 00:07:48.461 --rc genhtml_legend=1 00:07:48.461 --rc geninfo_all_blocks=1 00:07:48.461 --rc geninfo_unexecuted_blocks=1 00:07:48.461 00:07:48.461 ' 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:48.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.461 --rc genhtml_branch_coverage=1 00:07:48.461 --rc genhtml_function_coverage=1 00:07:48.461 --rc genhtml_legend=1 00:07:48.461 --rc geninfo_all_blocks=1 00:07:48.461 --rc geninfo_unexecuted_blocks=1 00:07:48.461 00:07:48.461 ' 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:48.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.461 --rc genhtml_branch_coverage=1 00:07:48.461 --rc genhtml_function_coverage=1 00:07:48.461 --rc genhtml_legend=1 00:07:48.461 --rc geninfo_all_blocks=1 00:07:48.461 --rc geninfo_unexecuted_blocks=1 00:07:48.461 00:07:48.461 ' 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:48.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.461 --rc genhtml_branch_coverage=1 00:07:48.461 --rc genhtml_function_coverage=1 00:07:48.461 --rc genhtml_legend=1 00:07:48.461 --rc geninfo_all_blocks=1 00:07:48.461 --rc geninfo_unexecuted_blocks=1 00:07:48.461 00:07:48.461 ' 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.461 16:32:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.461 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:48.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:48.462 16:32:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:55.034 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:55.034 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:55.034 Found net devices under 0000:86:00.0: cvl_0_0 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:55.034 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.035 16:32:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:55.035 Found net devices under 0000:86:00.1: cvl_0_1 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:55.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:07:55.035 00:07:55.035 --- 10.0.0.2 ping statistics --- 00:07:55.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.035 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:55.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:55.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:07:55.035 00:07:55.035 --- 10.0.0.1 ping statistics --- 00:07:55.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.035 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:55.035 16:32:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=388053 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 388053 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:55.035 16:32:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 388053 ']' 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.035 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.035 [2024-10-14 16:32:59.094859] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:07:55.035 [2024-10-14 16:32:59.094911] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.035 [2024-10-14 16:32:59.169018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.035 [2024-10-14 16:32:59.210521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.035 [2024-10-14 16:32:59.210561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.035 [2024-10-14 16:32:59.210570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.035 [2024-10-14 16:32:59.210575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.035 [2024-10-14 16:32:59.210580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
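At this point starttarget has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x1E (reactors on cores 1-4), tracepoints enabled via -e 0xFFFF, and waitforlisten blocking until the /var/tmp/spdk.sock RPC socket answers. A minimal standalone sketch of that step, reusing the paths echoed in the log (the readiness poll on rpc_get_methods is an assumption; the harness's waitforlisten helper does something equivalent):

    # Sketch: start the NVMe-oF target inside the test namespace and wait for its RPC server.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # keep polling until the target's RPC socket is up
    done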
00:07:55.035 [2024-10-14 16:32:59.212057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.035 [2024-10-14 16:32:59.212166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.035 [2024-10-14 16:32:59.212254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.035 [2024-10-14 16:32:59.212255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:55.295 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.295 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:55.295 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:55.295 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.295 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.555 [2024-10-14 16:32:59.966096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.555 16:32:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.555 Malloc0 00:07:55.555 [2024-10-14 16:33:00.044624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=388270 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 388270 /var/tmp/bdevperf.sock 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 388270 ']' 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:55.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:55.555 { 00:07:55.555 "params": { 00:07:55.555 "name": "Nvme$subsystem", 00:07:55.555 "trtype": "$TEST_TRANSPORT", 00:07:55.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.555 "adrfam": "ipv4", 00:07:55.555 "trsvcid": "$NVMF_PORT", 00:07:55.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.555 "hdgst": ${hdgst:-false}, 00:07:55.555 "ddgst": ${ddgst:-false} 00:07:55.555 }, 00:07:55.555 "method": "bdev_nvme_attach_controller" 00:07:55.555 } 00:07:55.555 EOF 00:07:55.555 )") 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:55.555 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:55.555 "params": { 00:07:55.555 "name": "Nvme0", 00:07:55.555 "trtype": "tcp", 00:07:55.555 "traddr": "10.0.0.2", 00:07:55.555 "adrfam": "ipv4", 00:07:55.555 "trsvcid": "4420", 00:07:55.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:55.555 "hdgst": false, 00:07:55.555 "ddgst": false 00:07:55.555 }, 00:07:55.555 "method": "bdev_nvme_attach_controller" 00:07:55.555 }' 00:07:55.555 [2024-10-14 16:33:00.142043] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
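The heredoc expanded above is gen_nvmf_target_json rendering a single bdev_nvme_attach_controller entry (Nvme0 at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode0, host nqn.2016-06.io.spdk:host0), and bdevperf reads it as JSON from /dev/fd/63, i.e. the config is handed in on a file descriptor (presumably a bash process substitution) rather than written to disk. A condensed sketch of the equivalent invocation, using only names that appear in the log:

    # Sketch: run the initiator-side verify workload against the target's TCP listener.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10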
00:07:55.555 [2024-10-14 16:33:00.142094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388270 ] 00:07:55.814 [2024-10-14 16:33:00.214460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.814 [2024-10-14 16:33:00.255571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.814 Running I/O for 10 seconds... 00:07:56.073 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.073 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:56.073 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:56.073 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.073 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.073 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.073 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.073 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:56.074 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:56.335 
16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=672 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 672 -ge 100 ']' 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.335 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.335 [2024-10-14 16:33:00.851514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.335 [2024-10-14 16:33:00.851555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.335 [2024-10-14 16:33:00.851570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.335 [2024-10-14 16:33:00.851577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.335 [2024-10-14 16:33:00.851586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.335 [2024-10-14 16:33:00.851598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.335 [2024-10-14 16:33:00.851613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.335 [2024-10-14 16:33:00.851620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.335 [2024-10-14 16:33:00.851629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.335 [2024-10-14 16:33:00.851635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:56.335 [2024-10-14 16:33:00.851643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.335 [2024-10-14 16:33:00.851649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.335 [2024-10-14 16:33:00.851658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.335 [2024-10-14 16:33:00.851664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.335 [2024-10-14 16:33:00.851672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.335 [2024-10-14 16:33:00.851678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.335 [2024-10-14 16:33:00.851686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.335 [2024-10-14 16:33:00.851693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.335 [2024-10-14 16:33:00.851700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:56.336 [2024-10-14 16:33:00.851791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:56.336 [2024-10-14 16:33:00.851945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.851989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.851995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 
16:33:00.852088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852229] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.336 [2024-10-14 16:33:00.852292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.336 [2024-10-14 16:33:00.852299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.337 [2024-10-14 16:33:00.852490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.852554] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x239c850 was disconnected and freed. reset controller. 
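The burst of "ABORTED - SQ DELETION" completions above is the intended failure injection, not a test bug: once the waitforio loop saw num_read_ops cross the 100-read threshold (67 on the first poll, 672 on the second), host_management.sh@84 removed the host from the subsystem while IO was still in flight, the target dropped that host's queue pairs, every outstanding command completed as aborted, and bdev_nvme freed qpair 0x239c850 and scheduled a controller reset. A sketch of that sequence, with the sockets, jq filter and NQNs taken from the log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Wait until bdevperf has completed at least 100 reads, polling at most 10 times...
    for i in {10..1}; do
        reads=$("$RPC" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 0.25
    done
    # ...then revoke the initiator's access mid-run to provoke the qpair teardown.
    "$RPC" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0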
00:07:56.337 [2024-10-14 16:33:00.853460] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:56.337 task offset: 103552 on job bdev=Nvme0n1 fails 00:07:56.337 00:07:56.337 Latency(us) 00:07:56.337 [2024-10-14T14:33:00.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.337 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:56.337 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:56.337 Verification LBA range: start 0x0 length 0x400 00:07:56.337 Nvme0n1 : 0.40 1913.84 119.62 159.49 0.00 30050.58 1513.57 26713.72 00:07:56.337 [2024-10-14T14:33:00.971Z] =================================================================================================================== 00:07:56.337 [2024-10-14T14:33:00.971Z] Total : 1913.84 119.62 159.49 0.00 30050.58 1513.57 26713.72 00:07:56.337 [2024-10-14 16:33:00.855833] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.337 [2024-10-14 16:33:00.855857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21835c0 (9): Bad file descriptor 00:07:56.337 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.337 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:56.337 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.337 [2024-10-14 16:33:00.857018] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:56.337 [2024-10-14 16:33:00.857098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:56.337 [2024-10-14 16:33:00.857121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.337 [2024-10-14 16:33:00.857134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:56.337 [2024-10-14 16:33:00.857142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:56.337 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.337 [2024-10-14 16:33:00.857149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:56.337 [2024-10-14 16:33:00.857156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21835c0 00:07:56.337 [2024-10-14 16:33:00.857173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21835c0 (9): Bad file descriptor 00:07:56.337 [2024-10-14 16:33:00.857184] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:56.337 [2024-10-14 16:33:00.857191] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:56.337 [2024-10-14 16:33:00.857198] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:07:56.337 [2024-10-14 16:33:00.857210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:07:56.337 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.337 16:33:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:57.276 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 388270 00:07:57.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (388270) - No such process 00:07:57.276 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:57.276 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:57.276 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:57.276 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:57.276 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:57.276 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:57.276 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:57.276 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:57.276 { 00:07:57.276 "params": { 00:07:57.277 "name": "Nvme$subsystem", 00:07:57.277 "trtype": "$TEST_TRANSPORT", 00:07:57.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.277 "adrfam": "ipv4", 00:07:57.277 "trsvcid": "$NVMF_PORT", 00:07:57.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.277 "hdgst": ${hdgst:-false}, 00:07:57.277 "ddgst": ${ddgst:-false} 00:07:57.277 }, 00:07:57.277 "method": "bdev_nvme_attach_controller" 00:07:57.277 } 00:07:57.277 EOF 00:07:57.277 )") 00:07:57.277 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:57.277 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:57.277 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:57.277 16:33:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:57.277 "params": { 00:07:57.277 "name": "Nvme0", 00:07:57.277 "trtype": "tcp", 00:07:57.277 "traddr": "10.0.0.2", 00:07:57.277 "adrfam": "ipv4", 00:07:57.277 "trsvcid": "4420", 00:07:57.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:57.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:57.277 "hdgst": false, 00:07:57.277 "ddgst": false 00:07:57.277 }, 00:07:57.277 "method": "bdev_nvme_attach_controller" 00:07:57.277 }' 00:07:57.535 [2024-10-14 16:33:01.918718] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
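The reconnect failure above is the tail end of that injection: bdevperf's reset path re-issues FABRIC CONNECT, the target rejects it with "does not allow host", the reset is abandoned and the app stops itself, which is why the later kill -9 finds no such process (the || true in the trap lets the script continue). host_management.sh@85 has already added the host back, and the one-second bdevperf run starting here confirms the subsystem is reachable again. A sketch of that recovery pair, under the same assumptions as the earlier sketches:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Re-allow the host, then do a short verify pass to confirm IO flows again.
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    "$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1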
00:07:57.535 [2024-10-14 16:33:01.918763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388698 ] 00:07:57.535 [2024-10-14 16:33:01.983573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.535 [2024-10-14 16:33:02.023985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.794 Running I/O for 1 seconds... 00:07:58.730 2048.00 IOPS, 128.00 MiB/s 00:07:58.730 Latency(us) 00:07:58.730 [2024-10-14T14:33:03.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.730 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:58.730 Verification LBA range: start 0x0 length 0x400 00:07:58.730 Nvme0n1 : 1.03 2058.56 128.66 0.00 0.00 30607.50 5086.84 26713.72 00:07:58.730 [2024-10-14T14:33:03.364Z] =================================================================================================================== 00:07:58.730 [2024-10-14T14:33:03.364Z] Total : 2058.56 128.66 0.00 0.00 30607.50 5086.84 26713.72 00:07:58.989 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:58.989 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:58.989 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:58.989 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:58.989 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:58.989 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:58.989 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.990 rmmod nvme_tcp 00:07:58.990 rmmod nvme_fabrics 00:07:58.990 rmmod nvme_keyring 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 388053 ']' 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 388053 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 388053 ']' 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 388053 00:07:58.990 16:33:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 388053 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 388053' 00:07:58.990 killing process with pid 388053 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 388053 00:07:58.990 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 388053 00:07:59.250 [2024-10-14 16:33:03.699595] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:59.250 16:33:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:01.789 00:08:01.789 real 0m12.987s 00:08:01.789 user 0m22.025s 00:08:01.789 sys 0m5.620s 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:01.789 ************************************ 00:08:01.789 END TEST nvmf_host_management 00:08:01.789 ************************************ 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
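The teardown is compressed into the tail above: nvmftestfini unloads the initiator-side kernel modules (rmmod nvme_tcp, nvme_fabrics, nvme_keyring), iptr replays a filtered iptables-save to drop the SPDK_NVMF ACCEPT rule, the cvl_0_0_ns_spdk namespace is removed and the leftover address on cvl_0_1 is flushed, after which the wrapper reports 12.987 s wall time for nvmf_host_management and moves straight on to nvmf_lvol. A rough standalone equivalent (the netns delete is an assumption about what _remove_spdk_ns does; the other commands mirror the log):

    # Sketch: undo the nvmf_tcp_init network and firewall setup.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1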
00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.789 ************************************ 00:08:01.789 START TEST nvmf_lvol 00:08:01.789 ************************************ 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:01.789 * Looking for test storage... 00:08:01.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:01.789 16:33:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.789 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:01.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.790 --rc genhtml_branch_coverage=1 00:08:01.790 --rc genhtml_function_coverage=1 00:08:01.790 --rc genhtml_legend=1 00:08:01.790 --rc geninfo_all_blocks=1 00:08:01.790 --rc geninfo_unexecuted_blocks=1 00:08:01.790 00:08:01.790 ' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:01.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.790 --rc genhtml_branch_coverage=1 00:08:01.790 --rc genhtml_function_coverage=1 00:08:01.790 --rc genhtml_legend=1 00:08:01.790 --rc geninfo_all_blocks=1 00:08:01.790 --rc geninfo_unexecuted_blocks=1 00:08:01.790 00:08:01.790 ' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:01.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.790 --rc genhtml_branch_coverage=1 00:08:01.790 --rc genhtml_function_coverage=1 00:08:01.790 --rc genhtml_legend=1 00:08:01.790 --rc geninfo_all_blocks=1 00:08:01.790 --rc geninfo_unexecuted_blocks=1 00:08:01.790 00:08:01.790 ' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:01.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.790 --rc genhtml_branch_coverage=1 00:08:01.790 --rc genhtml_function_coverage=1 00:08:01.790 --rc genhtml_legend=1 00:08:01.790 --rc geninfo_all_blocks=1 00:08:01.790 --rc geninfo_unexecuted_blocks=1 00:08:01.790 00:08:01.790 ' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
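The lines that follow trace test/nvmf/common.sh being sourced and nvmftestinit scanning the E810 ports; further down, nvmf_tcp_init moves the first port into a private network namespace and configures the 10.0.0.1/10.0.0.2 pair that every TCP test here uses. Pulled together from the commands traced later in this section (the cvl_0_0/cvl_0_1 names are this machine's), the resulting layout is roughly:
# nvmf_tcp_init, condensed: target side in a namespace at 10.0.0.2, initiator side at 10.0.0.1
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1      # start from clean addresses
ip netns add cvl_0_0_ns_spdk                            # the target runs inside this namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # first E810 port -> target side
ip addr add 10.0.0.1/24 dev cvl_0_1                     # second port stays with the initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the initiator reach the listener
ping -c 1 10.0.0.2                                      # sanity check: initiator -> target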
00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.790 16:33:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:08.428 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:08.428 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.428 16:33:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:08.428 Found net devices under 0000:86:00.0: cvl_0_0 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:08.428 Found net devices under 0000:86:00.1: cvl_0_1 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.428 16:33:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:08:08.428 00:08:08.428 --- 10.0.0.2 ping statistics --- 00:08:08.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.428 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:08:08.428 00:08:08.428 --- 10.0.0.1 ping statistics --- 00:08:08.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.428 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:08.428 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=392863 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 392863 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 392863 ']' 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.429 [2024-10-14 16:33:12.143981] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:08:08.429 [2024-10-14 16:33:12.144021] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.429 [2024-10-14 16:33:12.216258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:08.429 [2024-10-14 16:33:12.257986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.429 [2024-10-14 16:33:12.258022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.429 [2024-10-14 16:33:12.258029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.429 [2024-10-14 16:33:12.258035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.429 [2024-10-14 16:33:12.258040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.429 [2024-10-14 16:33:12.259410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.429 [2024-10-14 16:33:12.259519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.429 [2024-10-14 16:33:12.259519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:08.429 [2024-10-14 16:33:12.568470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:08.429 16:33:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:08.429 16:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:08.429 16:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:08.688 16:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:08.947 16:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cd079157-b2ea-4c54-bbfe-eaa626abd841 00:08:08.947 16:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cd079157-b2ea-4c54-bbfe-eaa626abd841 lvol 20 00:08:09.205 16:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=42926f94-7786-412a-9d09-a13cdd40614a 00:08:09.205 16:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.205 16:33:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 42926f94-7786-412a-9d09-a13cdd40614a 00:08:09.462 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.720 [2024-10-14 16:33:14.219110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.720 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.978 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:09.978 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=393356 00:08:09.978 16:33:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:10.913 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 42926f94-7786-412a-9d09-a13cdd40614a MY_SNAPSHOT 00:08:11.173 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ad65c5e3-0e86-4090-808e-c1f3a6da8148 00:08:11.173 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 42926f94-7786-412a-9d09-a13cdd40614a 30 00:08:11.431 16:33:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ad65c5e3-0e86-4090-808e-c1f3a6da8148 MY_CLONE 00:08:11.690 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=39ea0293-9f3e-46bd-8512-08c481a177dc 00:08:11.690 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 39ea0293-9f3e-46bd-8512-08c481a177dc 00:08:12.258 16:33:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 393356 00:08:20.378 Initializing NVMe Controllers 00:08:20.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:20.378 Controller IO queue size 128, less than required. 00:08:20.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
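The spdk_nvme_perf job whose output begins above is writing to an lvol that the preceding RPC trace assembled: a TCP transport, two 64 MiB malloc bdevs striped into raid0, an lvol store on the raid, an lvol of size 20 exported through nqn.2016-06.io.spdk:cnode0, and then, while the 10-second random-write run is in flight, snapshot, resize to 30, clone and inflate. Reduced to the bare rpc.py calls with the UUIDs this run generated — a summary of the trace, not a replacement for nvmf_lvol.sh:
# rpc.py = /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, as invoked in the trace
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                           # Malloc0
rpc.py bdev_malloc_create 64 512                                           # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs                                  # -> cd079157-b2ea-4c54-bbfe-eaa626abd841
rpc.py bdev_lvol_create -u cd079157-b2ea-4c54-bbfe-eaa626abd841 lvol 20    # -> 42926f94-7786-412a-9d09-a13cdd40614a
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 42926f94-7786-412a-9d09-a13cdd40614a
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf drives randwrite against 10.0.0.2:4420 (-q 128 -o 4096 -t 10):
rpc.py bdev_lvol_snapshot 42926f94-7786-412a-9d09-a13cdd40614a MY_SNAPSHOT # -> ad65c5e3-0e86-4090-808e-c1f3a6da8148
rpc.py bdev_lvol_resize 42926f94-7786-412a-9d09-a13cdd40614a 30
rpc.py bdev_lvol_clone ad65c5e3-0e86-4090-808e-c1f3a6da8148 MY_CLONE       # -> 39ea0293-9f3e-46bd-8512-08c481a177dc
rpc.py bdev_lvol_inflate 39ea0293-9f3e-46bd-8512-08c481a177dc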
00:08:20.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:20.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:20.378 Initialization complete. Launching workers. 00:08:20.378 ======================================================== 00:08:20.378 Latency(us) 00:08:20.378 Device Information : IOPS MiB/s Average min max 00:08:20.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12183.10 47.59 10507.40 1509.04 52360.65 00:08:20.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12307.30 48.08 10398.40 1206.84 56488.22 00:08:20.378 ======================================================== 00:08:20.378 Total : 24490.40 95.67 10452.62 1206.84 56488.22 00:08:20.378 00:08:20.378 16:33:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:20.637 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 42926f94-7786-412a-9d09-a13cdd40614a 00:08:20.637 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cd079157-b2ea-4c54-bbfe-eaa626abd841 00:08:20.895 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:20.896 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:20.896 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:20.896 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:20.896 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:20.896 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.896 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:20.896 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.896 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.896 rmmod nvme_tcp 00:08:20.896 rmmod nvme_fabrics 00:08:20.896 rmmod nvme_keyring 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 392863 ']' 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 392863 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 392863 ']' 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 392863 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 392863 00:08:21.155 16:33:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 392863' 00:08:21.155 killing process with pid 392863 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 392863 00:08:21.155 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 392863 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.415 16:33:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.320 16:33:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:23.320 00:08:23.320 real 0m21.990s 00:08:23.320 user 1m3.218s 00:08:23.320 sys 0m7.614s 00:08:23.320 16:33:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.320 16:33:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:23.320 ************************************ 00:08:23.320 END TEST nvmf_lvol 00:08:23.320 ************************************ 00:08:23.320 16:33:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:23.320 16:33:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:23.320 16:33:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.320 16:33:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.320 ************************************ 00:08:23.320 START TEST nvmf_lvs_grow 00:08:23.320 ************************************ 00:08:23.320 16:33:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:23.580 * Looking for test storage... 
00:08:23.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:23.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.580 --rc genhtml_branch_coverage=1 00:08:23.580 --rc genhtml_function_coverage=1 00:08:23.580 --rc genhtml_legend=1 00:08:23.580 --rc geninfo_all_blocks=1 00:08:23.580 --rc geninfo_unexecuted_blocks=1 00:08:23.580 00:08:23.580 ' 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:23.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.580 --rc genhtml_branch_coverage=1 00:08:23.580 --rc genhtml_function_coverage=1 00:08:23.580 --rc genhtml_legend=1 00:08:23.580 --rc geninfo_all_blocks=1 00:08:23.580 --rc geninfo_unexecuted_blocks=1 00:08:23.580 00:08:23.580 ' 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:23.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.580 --rc genhtml_branch_coverage=1 00:08:23.580 --rc genhtml_function_coverage=1 00:08:23.580 --rc genhtml_legend=1 00:08:23.580 --rc geninfo_all_blocks=1 00:08:23.580 --rc geninfo_unexecuted_blocks=1 00:08:23.580 00:08:23.580 ' 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:23.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.580 --rc genhtml_branch_coverage=1 00:08:23.580 --rc genhtml_function_coverage=1 00:08:23.580 --rc genhtml_legend=1 00:08:23.580 --rc geninfo_all_blocks=1 00:08:23.580 --rc geninfo_unexecuted_blocks=1 00:08:23.580 00:08:23.580 ' 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:23.580 16:33:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.580 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:23.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:23.581 16:33:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:30.151 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:30.151 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:30.151 16:33:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:30.151 Found net devices under 0000:86:00.0: cvl_0_0 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:30.151 Found net devices under 0000:86:00.1: cvl_0_1 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.151 16:33:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:30.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:08:30.152 00:08:30.152 --- 10.0.0.2 ping statistics --- 00:08:30.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.152 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:08:30.152 00:08:30.152 --- 10.0.0.1 ping statistics --- 00:08:30.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.152 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=398741 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 398741 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 398741 ']' 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.152 [2024-10-14 16:33:34.235841] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
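The nvmf_tcp_init phase traced above can be reproduced by hand. A minimal sketch, assuming the same E810 netdev names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 test addresses reported in the trace:

  # Flush any stale addresses, move the target-side port into its own namespace,
  # and assign the test IPs (target 10.0.0.2 inside the namespace, initiator 10.0.0.1 outside).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator-facing interface and confirm reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1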
00:08:30.152 [2024-10-14 16:33:34.235891] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.152 [2024-10-14 16:33:34.308724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.152 [2024-10-14 16:33:34.347494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.152 [2024-10-14 16:33:34.347529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.152 [2024-10-14 16:33:34.347536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.152 [2024-10-14 16:33:34.347542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.152 [2024-10-14 16:33:34.347547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.152 [2024-10-14 16:33:34.348110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:30.152 [2024-10-14 16:33:34.654590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.152 ************************************ 00:08:30.152 START TEST lvs_grow_clean 00:08:30.152 ************************************ 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:30.152 16:33:34 
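With the namespace in place, nvmf_tgt is started inside it and the TCP transport is created before the lvs_grow tests begin. A condensed sketch of those two steps, with paths shortened relative to the SPDK tree and the trace's options carried over verbatim:

  # Run the target on core 0 inside the target namespace, with tracing enabled (-e 0xFFFF).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # Once the default RPC socket (/var/tmp/spdk.sock) is listening, create the transport
  # with the options used above (-t tcp -o -u 8192).
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192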
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.152 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.411 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:30.411 16:33:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:30.669 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:30.669 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:30.669 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:30.928 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:30.928 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:30.928 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 lvol 150 00:08:30.928 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4fe98a3d-1c45-474f-84a3-cea0e18b6793 00:08:30.928 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.928 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:31.187 [2024-10-14 16:33:35.670991] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:31.187 [2024-10-14 16:33:35.671042] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:31.187 true 00:08:31.187 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:31.187 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:31.445 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:31.445 16:33:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.445 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4fe98a3d-1c45-474f-84a3-cea0e18b6793 00:08:31.704 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.964 [2024-10-14 16:33:36.393154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=399237 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 399237 /var/tmp/bdevperf.sock 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 399237 ']' 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:31.964 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:32.223 [2024-10-14 16:33:36.619823] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
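The lvol store under test is carved out of a 200 MiB file-backed AIO bdev, the backing file is then grown to 400 MiB, and the 150 MiB volume is exported through a fresh subsystem. A sketch of the equivalent RPC sequence, assuming a scratch file /tmp/aio_bdev in place of the tree-relative path used by the test, with <lvs-uuid> and <lvol-uuid> standing in for the UUIDs reported above:

  # 200 MiB backing file, 4 KiB block size, 4 MiB clusters -> 49 data clusters.
  truncate -s 200M /tmp/aio_bdev
  ./scripts/rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  ./scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
  # Grow the file and rescan the AIO bdev; the lvol store still reports 49 clusters
  # until bdev_lvol_grow_lvstore is called later, while I/O is running.
  truncate -s 400M /tmp/aio_bdev
  ./scripts/rpc.py bdev_aio_rescan aio_bdev
  # Export the volume over NVMe/TCP.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420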
00:08:32.223 [2024-10-14 16:33:36.619873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399237 ] 00:08:32.223 [2024-10-14 16:33:36.686762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.223 [2024-10-14 16:33:36.727308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.223 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.223 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:32.223 16:33:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:32.482 Nvme0n1 00:08:32.482 16:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:32.740 [ 00:08:32.740 { 00:08:32.740 "name": "Nvme0n1", 00:08:32.740 "aliases": [ 00:08:32.740 "4fe98a3d-1c45-474f-84a3-cea0e18b6793" 00:08:32.740 ], 00:08:32.740 "product_name": "NVMe disk", 00:08:32.740 "block_size": 4096, 00:08:32.740 "num_blocks": 38912, 00:08:32.740 "uuid": "4fe98a3d-1c45-474f-84a3-cea0e18b6793", 00:08:32.740 "numa_id": 1, 00:08:32.740 "assigned_rate_limits": { 00:08:32.740 "rw_ios_per_sec": 0, 00:08:32.740 "rw_mbytes_per_sec": 0, 00:08:32.740 "r_mbytes_per_sec": 0, 00:08:32.740 "w_mbytes_per_sec": 0 00:08:32.740 }, 00:08:32.740 "claimed": false, 00:08:32.740 "zoned": false, 00:08:32.740 "supported_io_types": { 00:08:32.740 "read": true, 00:08:32.740 "write": true, 00:08:32.740 "unmap": true, 00:08:32.740 "flush": true, 00:08:32.740 "reset": true, 00:08:32.740 "nvme_admin": true, 00:08:32.740 "nvme_io": true, 00:08:32.740 "nvme_io_md": false, 00:08:32.740 "write_zeroes": true, 00:08:32.740 "zcopy": false, 00:08:32.740 "get_zone_info": false, 00:08:32.740 "zone_management": false, 00:08:32.740 "zone_append": false, 00:08:32.740 "compare": true, 00:08:32.740 "compare_and_write": true, 00:08:32.740 "abort": true, 00:08:32.740 "seek_hole": false, 00:08:32.740 "seek_data": false, 00:08:32.740 "copy": true, 00:08:32.740 "nvme_iov_md": false 00:08:32.740 }, 00:08:32.740 "memory_domains": [ 00:08:32.740 { 00:08:32.740 "dma_device_id": "system", 00:08:32.740 "dma_device_type": 1 00:08:32.740 } 00:08:32.740 ], 00:08:32.740 "driver_specific": { 00:08:32.740 "nvme": [ 00:08:32.740 { 00:08:32.740 "trid": { 00:08:32.740 "trtype": "TCP", 00:08:32.740 "adrfam": "IPv4", 00:08:32.740 "traddr": "10.0.0.2", 00:08:32.740 "trsvcid": "4420", 00:08:32.740 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:32.740 }, 00:08:32.740 "ctrlr_data": { 00:08:32.740 "cntlid": 1, 00:08:32.740 "vendor_id": "0x8086", 00:08:32.740 "model_number": "SPDK bdev Controller", 00:08:32.740 "serial_number": "SPDK0", 00:08:32.740 "firmware_revision": "25.01", 00:08:32.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.740 "oacs": { 00:08:32.740 "security": 0, 00:08:32.740 "format": 0, 00:08:32.740 "firmware": 0, 00:08:32.740 "ns_manage": 0 00:08:32.740 }, 00:08:32.741 "multi_ctrlr": true, 00:08:32.741 
"ana_reporting": false 00:08:32.741 }, 00:08:32.741 "vs": { 00:08:32.741 "nvme_version": "1.3" 00:08:32.741 }, 00:08:32.741 "ns_data": { 00:08:32.741 "id": 1, 00:08:32.741 "can_share": true 00:08:32.741 } 00:08:32.741 } 00:08:32.741 ], 00:08:32.741 "mp_policy": "active_passive" 00:08:32.741 } 00:08:32.741 } 00:08:32.741 ] 00:08:32.741 16:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=399255 00:08:32.741 16:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:32.741 16:33:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:32.998 Running I/O for 10 seconds... 00:08:33.936 Latency(us) 00:08:33.936 [2024-10-14T14:33:38.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.936 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.936 Nvme0n1 : 1.00 23332.00 91.14 0.00 0.00 0.00 0.00 0.00 00:08:33.936 [2024-10-14T14:33:38.570Z] =================================================================================================================== 00:08:33.936 [2024-10-14T14:33:38.570Z] Total : 23332.00 91.14 0.00 0.00 0.00 0.00 0.00 00:08:33.936 00:08:34.873 16:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:34.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.873 Nvme0n1 : 2.00 23448.00 91.59 0.00 0.00 0.00 0.00 0.00 00:08:34.873 [2024-10-14T14:33:39.507Z] =================================================================================================================== 00:08:34.873 [2024-10-14T14:33:39.507Z] Total : 23448.00 91.59 0.00 0.00 0.00 0.00 0.00 00:08:34.873 00:08:34.873 true 00:08:34.873 16:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:34.873 16:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:35.132 16:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:35.132 16:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:35.132 16:33:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 399255 00:08:36.068 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.068 Nvme0n1 : 3.00 23433.00 91.54 0.00 0.00 0.00 0.00 0.00 00:08:36.068 [2024-10-14T14:33:40.702Z] =================================================================================================================== 00:08:36.068 [2024-10-14T14:33:40.702Z] Total : 23433.00 91.54 0.00 0.00 0.00 0.00 0.00 00:08:36.068 00:08:37.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.006 Nvme0n1 : 4.00 23530.50 91.92 0.00 0.00 0.00 0.00 0.00 00:08:37.006 [2024-10-14T14:33:41.640Z] 
=================================================================================================================== 00:08:37.006 [2024-10-14T14:33:41.640Z] Total : 23530.50 91.92 0.00 0.00 0.00 0.00 0.00 00:08:37.006 00:08:37.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.943 Nvme0n1 : 5.00 23592.20 92.16 0.00 0.00 0.00 0.00 0.00 00:08:37.943 [2024-10-14T14:33:42.577Z] =================================================================================================================== 00:08:37.943 [2024-10-14T14:33:42.577Z] Total : 23592.20 92.16 0.00 0.00 0.00 0.00 0.00 00:08:37.943 00:08:38.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.880 Nvme0n1 : 6.00 23641.17 92.35 0.00 0.00 0.00 0.00 0.00 00:08:38.880 [2024-10-14T14:33:43.514Z] =================================================================================================================== 00:08:38.880 [2024-10-14T14:33:43.514Z] Total : 23641.17 92.35 0.00 0.00 0.00 0.00 0.00 00:08:38.880 00:08:39.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.817 Nvme0n1 : 7.00 23672.86 92.47 0.00 0.00 0.00 0.00 0.00 00:08:39.817 [2024-10-14T14:33:44.451Z] =================================================================================================================== 00:08:39.817 [2024-10-14T14:33:44.451Z] Total : 23672.86 92.47 0.00 0.00 0.00 0.00 0.00 00:08:39.817 00:08:41.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.195 Nvme0n1 : 8.00 23685.88 92.52 0.00 0.00 0.00 0.00 0.00 00:08:41.195 [2024-10-14T14:33:45.829Z] =================================================================================================================== 00:08:41.195 [2024-10-14T14:33:45.829Z] Total : 23685.88 92.52 0.00 0.00 0.00 0.00 0.00 00:08:41.195 00:08:42.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.133 Nvme0n1 : 9.00 23716.78 92.64 0.00 0.00 0.00 0.00 0.00 00:08:42.133 [2024-10-14T14:33:46.767Z] =================================================================================================================== 00:08:42.133 [2024-10-14T14:33:46.767Z] Total : 23716.78 92.64 0.00 0.00 0.00 0.00 0.00 00:08:42.133 00:08:43.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.072 Nvme0n1 : 10.00 23729.30 92.69 0.00 0.00 0.00 0.00 0.00 00:08:43.072 [2024-10-14T14:33:47.706Z] =================================================================================================================== 00:08:43.072 [2024-10-14T14:33:47.706Z] Total : 23729.30 92.69 0.00 0.00 0.00 0.00 0.00 00:08:43.072 00:08:43.072 00:08:43.072 Latency(us) 00:08:43.072 [2024-10-14T14:33:47.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.072 Nvme0n1 : 10.01 23730.08 92.70 0.00 0.00 5391.20 3120.76 10860.25 00:08:43.072 [2024-10-14T14:33:47.706Z] =================================================================================================================== 00:08:43.072 [2024-10-14T14:33:47.706Z] Total : 23730.08 92.70 0.00 0.00 5391.20 3120.76 10860.25 00:08:43.072 { 00:08:43.072 "results": [ 00:08:43.072 { 00:08:43.072 "job": "Nvme0n1", 00:08:43.072 "core_mask": "0x2", 00:08:43.072 "workload": "randwrite", 00:08:43.072 "status": "finished", 00:08:43.072 "queue_depth": 128, 00:08:43.072 "io_size": 4096, 00:08:43.072 
"runtime": 10.005064, 00:08:43.072 "iops": 23730.08308592529, 00:08:43.072 "mibps": 92.69563705439566, 00:08:43.072 "io_failed": 0, 00:08:43.072 "io_timeout": 0, 00:08:43.072 "avg_latency_us": 5391.202910802812, 00:08:43.072 "min_latency_us": 3120.7619047619046, 00:08:43.072 "max_latency_us": 10860.251428571428 00:08:43.072 } 00:08:43.072 ], 00:08:43.072 "core_count": 1 00:08:43.072 } 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 399237 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 399237 ']' 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 399237 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399237 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399237' 00:08:43.072 killing process with pid 399237 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 399237 00:08:43.072 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.072 00:08:43.072 Latency(us) 00:08:43.072 [2024-10-14T14:33:47.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.072 [2024-10-14T14:33:47.706Z] =================================================================================================================== 00:08:43.072 [2024-10-14T14:33:47.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 399237 00:08:43.072 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.331 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:43.589 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:43.589 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:43.589 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:43.589 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:43.589 16:33:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.848 [2024-10-14 16:33:48.371213] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:43.848 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:44.107 request: 00:08:44.107 { 00:08:44.107 "uuid": "dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49", 00:08:44.107 "method": "bdev_lvol_get_lvstores", 00:08:44.107 "req_id": 1 00:08:44.107 } 00:08:44.107 Got JSON-RPC error response 00:08:44.107 response: 00:08:44.107 { 00:08:44.107 "code": -19, 00:08:44.107 "message": "No such device" 00:08:44.107 } 00:08:44.107 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:44.107 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.107 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:44.107 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.107 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:44.366 aio_bdev 00:08:44.366 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4fe98a3d-1c45-474f-84a3-cea0e18b6793 00:08:44.366 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=4fe98a3d-1c45-474f-84a3-cea0e18b6793 00:08:44.366 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.367 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:44.367 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.367 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:44.367 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:44.367 16:33:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4fe98a3d-1c45-474f-84a3-cea0e18b6793 -t 2000 00:08:44.626 [ 00:08:44.626 { 00:08:44.626 "name": "4fe98a3d-1c45-474f-84a3-cea0e18b6793", 00:08:44.626 "aliases": [ 00:08:44.626 "lvs/lvol" 00:08:44.626 ], 00:08:44.626 "product_name": "Logical Volume", 00:08:44.626 "block_size": 4096, 00:08:44.626 "num_blocks": 38912, 00:08:44.626 "uuid": "4fe98a3d-1c45-474f-84a3-cea0e18b6793", 00:08:44.626 "assigned_rate_limits": { 00:08:44.626 "rw_ios_per_sec": 0, 00:08:44.626 "rw_mbytes_per_sec": 0, 00:08:44.626 "r_mbytes_per_sec": 0, 00:08:44.626 "w_mbytes_per_sec": 0 00:08:44.626 }, 00:08:44.626 "claimed": false, 00:08:44.626 "zoned": false, 00:08:44.626 "supported_io_types": { 00:08:44.626 "read": true, 00:08:44.626 "write": true, 00:08:44.626 "unmap": true, 00:08:44.626 "flush": false, 00:08:44.626 "reset": true, 00:08:44.626 "nvme_admin": false, 00:08:44.626 "nvme_io": false, 00:08:44.626 "nvme_io_md": false, 00:08:44.626 "write_zeroes": true, 00:08:44.626 "zcopy": false, 00:08:44.626 "get_zone_info": false, 00:08:44.626 "zone_management": false, 00:08:44.626 "zone_append": false, 00:08:44.626 "compare": false, 00:08:44.626 "compare_and_write": false, 00:08:44.626 "abort": false, 00:08:44.626 "seek_hole": true, 00:08:44.626 "seek_data": true, 00:08:44.626 "copy": false, 00:08:44.626 "nvme_iov_md": false 00:08:44.626 }, 00:08:44.626 "driver_specific": { 00:08:44.626 "lvol": { 00:08:44.626 "lvol_store_uuid": "dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49", 00:08:44.626 "base_bdev": "aio_bdev", 00:08:44.626 "thin_provision": false, 00:08:44.626 "num_allocated_clusters": 38, 00:08:44.626 "snapshot": false, 00:08:44.626 "clone": false, 00:08:44.626 "esnap_clone": false 00:08:44.626 } 00:08:44.626 } 00:08:44.626 } 00:08:44.626 ] 00:08:44.626 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:44.626 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:44.626 
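The measurement itself runs from a second SPDK application: bdevperf attaches to the exported namespace over TCP, drives 4 KiB random writes at queue depth 128 for 10 seconds, and the lvol store is grown mid-run without disturbing the workload (around 23.7k IOPS at roughly 5.4 ms average latency in the run above). A condensed sketch of that sequence, using the same sockets and the placeholder <lvs-uuid>:

  # Start bdevperf idle (-z) on core 1 with the workload parameters from the trace.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # Attach the exported namespace as bdev Nvme0n1.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # Kick off the timed run, then grow the lvol store while writes are in flight.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  ./scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>
  # total_data_clusters goes from 49 to 99 while the run continues to completion.
  ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'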
16:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:44.885 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:44.885 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:44.885 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:44.885 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:44.885 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4fe98a3d-1c45-474f-84a3-cea0e18b6793 00:08:45.144 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd8fb1a8-ecc5-42ba-9cd4-53a423f4ca49 00:08:45.403 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:45.662 00:08:45.662 real 0m15.403s 00:08:45.662 user 0m15.036s 00:08:45.662 sys 0m1.435s 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:45.662 ************************************ 00:08:45.662 END TEST lvs_grow_clean 00:08:45.662 ************************************ 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:45.662 ************************************ 00:08:45.662 START TEST lvs_grow_dirty 00:08:45.662 ************************************ 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
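Teardown for the clean variant then checks that the volume and its cluster accounting survive detaching and re-attaching the backing bdev. A sketch of those verification steps, with the same placeholders:

  ./scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'   # 61 free after the run
  # Deleting the AIO bdev closes the lvol store; querying it now fails with -19 (No such device).
  ./scripts/rpc.py bdev_aio_delete aio_bdev
  ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> || true
  # Re-creating the AIO bdev re-examines the file and brings the volume back with 99 clusters intact.
  ./scripts/rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000
  ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'
  # Final cleanup.
  ./scripts/rpc.py bdev_lvol_delete <lvol-uuid>
  ./scripts/rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>
  ./scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f /tmp/aio_bdev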
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:45.662 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.921 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:45.921 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:46.180 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=235bbacc-e03b-40b4-a230-981088f4373d 00:08:46.180 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:46.180 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:08:46.180 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:46.180 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:46.180 16:33:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 235bbacc-e03b-40b4-a230-981088f4373d lvol 150 00:08:46.437 16:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e9853297-fd13-4871-ab3a-8a5bd92db734 00:08:46.437 16:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.437 16:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:46.695 [2024-10-14 16:33:51.173562] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:46.695 [2024-10-14 16:33:51.173613] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:46.695 true 00:08:46.695 16:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:08:46.695 16:33:51 
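Both variants read the cluster counts back the same way, by pairing bdev_lvol_get_lvstores with jq; for example, against the dirty run's store:

  ./scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after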
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:46.953 16:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:46.953 16:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:46.953 16:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9853297-fd13-4871-ab3a-8a5bd92db734 00:08:47.212 16:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:47.470 [2024-10-14 16:33:51.907749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.470 16:33:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.470 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=401838 00:08:47.470 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.470 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:47.470 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 401838 /var/tmp/bdevperf.sock 00:08:47.470 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 401838 ']' 00:08:47.470 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:47.470 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.470 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:47.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:47.470 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.470 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:47.733 [2024-10-14 16:33:52.136042] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:08:47.733 [2024-10-14 16:33:52.136086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid401838 ] 00:08:47.733 [2024-10-14 16:33:52.200271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.733 [2024-10-14 16:33:52.239946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.733 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.733 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:47.733 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:48.361 Nvme0n1 00:08:48.361 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:48.361 [ 00:08:48.361 { 00:08:48.361 "name": "Nvme0n1", 00:08:48.361 "aliases": [ 00:08:48.361 "e9853297-fd13-4871-ab3a-8a5bd92db734" 00:08:48.361 ], 00:08:48.361 "product_name": "NVMe disk", 00:08:48.361 "block_size": 4096, 00:08:48.361 "num_blocks": 38912, 00:08:48.361 "uuid": "e9853297-fd13-4871-ab3a-8a5bd92db734", 00:08:48.361 "numa_id": 1, 00:08:48.361 "assigned_rate_limits": { 00:08:48.361 "rw_ios_per_sec": 0, 00:08:48.361 "rw_mbytes_per_sec": 0, 00:08:48.361 "r_mbytes_per_sec": 0, 00:08:48.361 "w_mbytes_per_sec": 0 00:08:48.361 }, 00:08:48.361 "claimed": false, 00:08:48.361 "zoned": false, 00:08:48.361 "supported_io_types": { 00:08:48.361 "read": true, 00:08:48.361 "write": true, 00:08:48.361 "unmap": true, 00:08:48.361 "flush": true, 00:08:48.361 "reset": true, 00:08:48.361 "nvme_admin": true, 00:08:48.361 "nvme_io": true, 00:08:48.361 "nvme_io_md": false, 00:08:48.361 "write_zeroes": true, 00:08:48.361 "zcopy": false, 00:08:48.361 "get_zone_info": false, 00:08:48.361 "zone_management": false, 00:08:48.361 "zone_append": false, 00:08:48.361 "compare": true, 00:08:48.361 "compare_and_write": true, 00:08:48.361 "abort": true, 00:08:48.361 "seek_hole": false, 00:08:48.361 "seek_data": false, 00:08:48.361 "copy": true, 00:08:48.361 "nvme_iov_md": false 00:08:48.361 }, 00:08:48.361 "memory_domains": [ 00:08:48.361 { 00:08:48.361 "dma_device_id": "system", 00:08:48.361 "dma_device_type": 1 00:08:48.361 } 00:08:48.361 ], 00:08:48.361 "driver_specific": { 00:08:48.361 "nvme": [ 00:08:48.361 { 00:08:48.361 "trid": { 00:08:48.361 "trtype": "TCP", 00:08:48.361 "adrfam": "IPv4", 00:08:48.361 "traddr": "10.0.0.2", 00:08:48.361 "trsvcid": "4420", 00:08:48.361 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:48.361 }, 00:08:48.361 "ctrlr_data": { 00:08:48.361 "cntlid": 1, 00:08:48.361 "vendor_id": "0x8086", 00:08:48.361 "model_number": "SPDK bdev Controller", 00:08:48.361 "serial_number": "SPDK0", 00:08:48.361 "firmware_revision": "25.01", 00:08:48.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:48.361 "oacs": { 00:08:48.361 "security": 0, 00:08:48.361 "format": 0, 00:08:48.361 "firmware": 0, 00:08:48.361 "ns_manage": 0 00:08:48.361 }, 00:08:48.361 "multi_ctrlr": true, 00:08:48.361 
"ana_reporting": false 00:08:48.361 }, 00:08:48.361 "vs": { 00:08:48.361 "nvme_version": "1.3" 00:08:48.361 }, 00:08:48.361 "ns_data": { 00:08:48.361 "id": 1, 00:08:48.361 "can_share": true 00:08:48.361 } 00:08:48.361 } 00:08:48.361 ], 00:08:48.361 "mp_policy": "active_passive" 00:08:48.361 } 00:08:48.361 } 00:08:48.361 ] 00:08:48.361 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=401866 00:08:48.361 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:48.361 16:33:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:48.361 Running I/O for 10 seconds... 00:08:49.747 Latency(us) 00:08:49.747 [2024-10-14T14:33:54.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.747 Nvme0n1 : 1.00 23453.00 91.61 0.00 0.00 0.00 0.00 0.00 00:08:49.747 [2024-10-14T14:33:54.381Z] =================================================================================================================== 00:08:49.747 [2024-10-14T14:33:54.381Z] Total : 23453.00 91.61 0.00 0.00 0.00 0.00 0.00 00:08:49.747 00:08:50.314 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 235bbacc-e03b-40b4-a230-981088f4373d 00:08:50.572 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.572 Nvme0n1 : 2.00 23617.00 92.25 0.00 0.00 0.00 0.00 0.00 00:08:50.572 [2024-10-14T14:33:55.206Z] =================================================================================================================== 00:08:50.572 [2024-10-14T14:33:55.206Z] Total : 23617.00 92.25 0.00 0.00 0.00 0.00 0.00 00:08:50.572 00:08:50.572 true 00:08:50.572 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:08:50.572 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:50.831 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:50.831 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:50.831 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 401866 00:08:51.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.397 Nvme0n1 : 3.00 23698.67 92.57 0.00 0.00 0.00 0.00 0.00 00:08:51.397 [2024-10-14T14:33:56.031Z] =================================================================================================================== 00:08:51.397 [2024-10-14T14:33:56.031Z] Total : 23698.67 92.57 0.00 0.00 0.00 0.00 0.00 00:08:51.397 00:08:52.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.771 Nvme0n1 : 4.00 23770.25 92.85 0.00 0.00 0.00 0.00 0.00 00:08:52.771 [2024-10-14T14:33:57.405Z] 
=================================================================================================================== 00:08:52.771 [2024-10-14T14:33:57.405Z] Total : 23770.25 92.85 0.00 0.00 0.00 0.00 0.00 00:08:52.771 00:08:53.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.706 Nvme0n1 : 5.00 23830.20 93.09 0.00 0.00 0.00 0.00 0.00 00:08:53.706 [2024-10-14T14:33:58.340Z] =================================================================================================================== 00:08:53.706 [2024-10-14T14:33:58.340Z] Total : 23830.20 93.09 0.00 0.00 0.00 0.00 0.00 00:08:53.706 00:08:54.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.643 Nvme0n1 : 6.00 23859.83 93.20 0.00 0.00 0.00 0.00 0.00 00:08:54.643 [2024-10-14T14:33:59.277Z] =================================================================================================================== 00:08:54.643 [2024-10-14T14:33:59.277Z] Total : 23859.83 93.20 0.00 0.00 0.00 0.00 0.00 00:08:54.643 00:08:55.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.578 Nvme0n1 : 7.00 23891.14 93.32 0.00 0.00 0.00 0.00 0.00 00:08:55.578 [2024-10-14T14:34:00.212Z] =================================================================================================================== 00:08:55.578 [2024-10-14T14:34:00.212Z] Total : 23891.14 93.32 0.00 0.00 0.00 0.00 0.00 00:08:55.578 00:08:56.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.513 Nvme0n1 : 8.00 23891.38 93.33 0.00 0.00 0.00 0.00 0.00 00:08:56.513 [2024-10-14T14:34:01.147Z] =================================================================================================================== 00:08:56.513 [2024-10-14T14:34:01.147Z] Total : 23891.38 93.33 0.00 0.00 0.00 0.00 0.00 00:08:56.513 00:08:57.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.449 Nvme0n1 : 9.00 23914.56 93.42 0.00 0.00 0.00 0.00 0.00 00:08:57.449 [2024-10-14T14:34:02.083Z] =================================================================================================================== 00:08:57.449 [2024-10-14T14:34:02.083Z] Total : 23914.56 93.42 0.00 0.00 0.00 0.00 0.00 00:08:57.449 00:08:58.383 00:08:58.383 Latency(us) 00:08:58.383 [2024-10-14T14:34:03.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.383 Nvme0n1 : 10.00 23933.46 93.49 0.00 0.00 5345.44 3167.57 11109.91 00:08:58.383 [2024-10-14T14:34:03.017Z] =================================================================================================================== 00:08:58.383 [2024-10-14T14:34:03.017Z] Total : 23933.46 93.49 0.00 0.00 5345.44 3167.57 11109.91 00:08:58.383 { 00:08:58.383 "results": [ 00:08:58.383 { 00:08:58.383 "job": "Nvme0n1", 00:08:58.383 "core_mask": "0x2", 00:08:58.383 "workload": "randwrite", 00:08:58.383 "status": "finished", 00:08:58.383 "queue_depth": 128, 00:08:58.383 "io_size": 4096, 00:08:58.383 "runtime": 10.001271, 00:08:58.383 "iops": 23933.458057480893, 00:08:58.383 "mibps": 93.49007053703474, 00:08:58.384 "io_failed": 0, 00:08:58.384 "io_timeout": 0, 00:08:58.384 "avg_latency_us": 5345.4370509592345, 00:08:58.384 "min_latency_us": 3167.5733333333333, 00:08:58.384 "max_latency_us": 11109.91238095238 00:08:58.384 } 00:08:58.384 ], 00:08:58.384 "core_count": 1 00:08:58.384 } 00:08:58.642 16:34:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 401838 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 401838 ']' 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 401838 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 401838 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 401838' 00:08:58.642 killing process with pid 401838 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 401838 00:08:58.642 Received shutdown signal, test time was about 10.000000 seconds 00:08:58.642 00:08:58.642 Latency(us) 00:08:58.642 [2024-10-14T14:34:03.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.642 [2024-10-14T14:34:03.276Z] =================================================================================================================== 00:08:58.642 [2024-10-14T14:34:03.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 401838 00:08:58.642 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:58.901 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:59.159 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:08:59.159 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:59.159 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:59.159 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:59.159 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 398741 00:08:59.159 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 398741 00:08:59.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 398741 Killed "${NVMF_APP[@]}" "$@" 00:08:59.418 16:34:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=403710 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 403710 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 403710 ']' 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.418 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:59.418 [2024-10-14 16:34:03.887341] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:08:59.418 [2024-10-14 16:34:03.887388] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.418 [2024-10-14 16:34:03.961281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.418 [2024-10-14 16:34:04.001264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.418 [2024-10-14 16:34:04.001299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.418 [2024-10-14 16:34:04.001306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.418 [2024-10-14 16:34:04.001312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.418 [2024-10-14 16:34:04.001317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
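
At this point the lvs_grow_dirty case has deliberately SIGKILLed the first nvmf_tgt (pid 398741) while the grown lvstore was still dirty, and a fresh target (pid 403710) has been started in the cvl_0_0_ns_spdk namespace. A condensed sketch of what target/nvmf_lvs_grow.sh does next, reconstructed from the trace; the $rpc shorthand and the shortened paths are illustrative, not part of the log:

  rpc=scripts/rpc.py
  kill -9 "$nvmfpid"                                   # leave the grown lvstore dirty
  nvmfappstart -m 0x1                                  # fresh nvmf_tgt, pid 403710 above
  $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # re-attach the backing file;
                                                       # blobstore recovery replays the dirty metadata
  waitforbdev e9853297-fd13-4871-ab3a-8a5bd92db734     # poll bdev_get_bdevs until the lvol reappears
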
00:08:59.418 [2024-10-14 16:34:04.001858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.677 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.677 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:59.677 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:59.677 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.677 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:59.677 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.677 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.677 [2024-10-14 16:34:04.302975] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:59.677 [2024-10-14 16:34:04.303055] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:59.677 [2024-10-14 16:34:04.303080] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:59.936 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:59.936 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e9853297-fd13-4871-ab3a-8a5bd92db734 00:08:59.936 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e9853297-fd13-4871-ab3a-8a5bd92db734 00:08:59.936 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:59.936 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:59.936 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:59.936 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:59.936 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:59.936 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9853297-fd13-4871-ab3a-8a5bd92db734 -t 2000 00:09:00.194 [ 00:09:00.194 { 00:09:00.194 "name": "e9853297-fd13-4871-ab3a-8a5bd92db734", 00:09:00.194 "aliases": [ 00:09:00.194 "lvs/lvol" 00:09:00.194 ], 00:09:00.194 "product_name": "Logical Volume", 00:09:00.194 "block_size": 4096, 00:09:00.194 "num_blocks": 38912, 00:09:00.194 "uuid": "e9853297-fd13-4871-ab3a-8a5bd92db734", 00:09:00.194 "assigned_rate_limits": { 00:09:00.194 "rw_ios_per_sec": 0, 00:09:00.194 "rw_mbytes_per_sec": 0, 00:09:00.194 "r_mbytes_per_sec": 0, 00:09:00.194 "w_mbytes_per_sec": 0 00:09:00.194 }, 00:09:00.194 "claimed": false, 00:09:00.194 "zoned": false, 
00:09:00.194 "supported_io_types": { 00:09:00.194 "read": true, 00:09:00.194 "write": true, 00:09:00.194 "unmap": true, 00:09:00.194 "flush": false, 00:09:00.194 "reset": true, 00:09:00.194 "nvme_admin": false, 00:09:00.194 "nvme_io": false, 00:09:00.194 "nvme_io_md": false, 00:09:00.194 "write_zeroes": true, 00:09:00.194 "zcopy": false, 00:09:00.194 "get_zone_info": false, 00:09:00.194 "zone_management": false, 00:09:00.194 "zone_append": false, 00:09:00.194 "compare": false, 00:09:00.194 "compare_and_write": false, 00:09:00.194 "abort": false, 00:09:00.194 "seek_hole": true, 00:09:00.194 "seek_data": true, 00:09:00.194 "copy": false, 00:09:00.194 "nvme_iov_md": false 00:09:00.194 }, 00:09:00.194 "driver_specific": { 00:09:00.194 "lvol": { 00:09:00.194 "lvol_store_uuid": "235bbacc-e03b-40b4-a230-981088f4373d", 00:09:00.194 "base_bdev": "aio_bdev", 00:09:00.194 "thin_provision": false, 00:09:00.194 "num_allocated_clusters": 38, 00:09:00.194 "snapshot": false, 00:09:00.194 "clone": false, 00:09:00.194 "esnap_clone": false 00:09:00.194 } 00:09:00.194 } 00:09:00.194 } 00:09:00.194 ] 00:09:00.194 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:00.194 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:09:00.194 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:00.453 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:00.454 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:09:00.454 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:00.454 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:00.454 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:00.713 [2024-10-14 16:34:05.227893] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:00.713 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:09:00.972 request: 00:09:00.972 { 00:09:00.972 "uuid": "235bbacc-e03b-40b4-a230-981088f4373d", 00:09:00.972 "method": "bdev_lvol_get_lvstores", 00:09:00.972 "req_id": 1 00:09:00.972 } 00:09:00.972 Got JSON-RPC error response 00:09:00.972 response: 00:09:00.972 { 00:09:00.972 "code": -19, 00:09:00.972 "message": "No such device" 00:09:00.972 } 00:09:00.972 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:00.972 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:00.972 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:00.972 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:00.972 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.231 aio_bdev 00:09:01.231 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e9853297-fd13-4871-ab3a-8a5bd92db734 00:09:01.231 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e9853297-fd13-4871-ab3a-8a5bd92db734 00:09:01.231 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.231 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:01.231 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.231 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.231 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:01.231 16:34:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e9853297-fd13-4871-ab3a-8a5bd92db734 -t 2000 00:09:01.490 [ 00:09:01.490 { 00:09:01.490 "name": "e9853297-fd13-4871-ab3a-8a5bd92db734", 00:09:01.490 "aliases": [ 00:09:01.490 "lvs/lvol" 00:09:01.490 ], 00:09:01.490 "product_name": "Logical Volume", 00:09:01.490 "block_size": 4096, 00:09:01.490 "num_blocks": 38912, 00:09:01.490 "uuid": "e9853297-fd13-4871-ab3a-8a5bd92db734", 00:09:01.490 "assigned_rate_limits": { 00:09:01.490 "rw_ios_per_sec": 0, 00:09:01.490 "rw_mbytes_per_sec": 0, 00:09:01.490 "r_mbytes_per_sec": 0, 00:09:01.490 "w_mbytes_per_sec": 0 00:09:01.490 }, 00:09:01.490 "claimed": false, 00:09:01.490 "zoned": false, 00:09:01.490 "supported_io_types": { 00:09:01.490 "read": true, 00:09:01.490 "write": true, 00:09:01.490 "unmap": true, 00:09:01.490 "flush": false, 00:09:01.490 "reset": true, 00:09:01.490 "nvme_admin": false, 00:09:01.490 "nvme_io": false, 00:09:01.490 "nvme_io_md": false, 00:09:01.490 "write_zeroes": true, 00:09:01.490 "zcopy": false, 00:09:01.490 "get_zone_info": false, 00:09:01.490 "zone_management": false, 00:09:01.490 "zone_append": false, 00:09:01.490 "compare": false, 00:09:01.490 "compare_and_write": false, 00:09:01.490 "abort": false, 00:09:01.490 "seek_hole": true, 00:09:01.490 "seek_data": true, 00:09:01.490 "copy": false, 00:09:01.490 "nvme_iov_md": false 00:09:01.490 }, 00:09:01.490 "driver_specific": { 00:09:01.490 "lvol": { 00:09:01.490 "lvol_store_uuid": "235bbacc-e03b-40b4-a230-981088f4373d", 00:09:01.490 "base_bdev": "aio_bdev", 00:09:01.490 "thin_provision": false, 00:09:01.490 "num_allocated_clusters": 38, 00:09:01.490 "snapshot": false, 00:09:01.490 "clone": false, 00:09:01.490 "esnap_clone": false 00:09:01.490 } 00:09:01.490 } 00:09:01.490 } 00:09:01.490 ] 00:09:01.490 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:01.490 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:09:01.490 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:01.749 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:01.749 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235bbacc-e03b-40b4-a230-981088f4373d 00:09:01.749 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:01.749 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:01.749 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e9853297-fd13-4871-ab3a-8a5bd92db734 00:09:02.007 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 235bbacc-e03b-40b4-a230-981088f4373d 
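
The recovered lvol reports 38 allocated clusters, so the lvstore checks that follow verify both the free count and the grown total before tearing everything down. A rough outline of that verification and cleanup, as reconstructed from the trace; the $rpc shorthand is illustrative:

  lvs_uuid=235bbacc-e03b-40b4-a230-981088f4373d
  free=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
  total=$($rpc bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
  (( free == 61 ))     # 99 total minus the 38 clusters allocated to the lvol
  (( total == 99 ))    # the grow issued before the SIGKILL survived blobstore recovery
  $rpc bdev_lvol_delete e9853297-fd13-4871-ab3a-8a5bd92db734
  $rpc bdev_lvol_delete_lvstore -u "$lvs_uuid"
  $rpc bdev_aio_delete aio_bdev
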
00:09:02.267 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:02.525 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.525 00:09:02.525 real 0m16.790s 00:09:02.525 user 0m43.387s 00:09:02.525 sys 0m3.738s 00:09:02.525 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.525 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.525 ************************************ 00:09:02.525 END TEST lvs_grow_dirty 00:09:02.525 ************************************ 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:02.525 nvmf_trace.0 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:02.525 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.526 rmmod nvme_tcp 00:09:02.526 rmmod nvme_fabrics 00:09:02.526 rmmod nvme_keyring 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 403710 ']' 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 403710 00:09:02.526 
16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 403710 ']' 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 403710 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:02.526 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 403710 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 403710' 00:09:02.785 killing process with pid 403710 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 403710 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 403710 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.785 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.320 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:05.320 00:09:05.320 real 0m41.468s 00:09:05.320 user 1m3.983s 00:09:05.320 sys 0m10.118s 00:09:05.320 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.320 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:05.320 ************************************ 00:09:05.320 END TEST nvmf_lvs_grow 00:09:05.320 ************************************ 00:09:05.320 16:34:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:05.320 16:34:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:05.320 16:34:09 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.320 16:34:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.320 ************************************ 00:09:05.320 START TEST nvmf_bdev_io_wait 00:09:05.320 ************************************ 00:09:05.320 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:05.320 * Looking for test storage... 00:09:05.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.321 --rc genhtml_branch_coverage=1 00:09:05.321 --rc genhtml_function_coverage=1 00:09:05.321 --rc genhtml_legend=1 00:09:05.321 --rc geninfo_all_blocks=1 00:09:05.321 --rc geninfo_unexecuted_blocks=1 00:09:05.321 00:09:05.321 ' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.321 --rc genhtml_branch_coverage=1 00:09:05.321 --rc genhtml_function_coverage=1 00:09:05.321 --rc genhtml_legend=1 00:09:05.321 --rc geninfo_all_blocks=1 00:09:05.321 --rc geninfo_unexecuted_blocks=1 00:09:05.321 00:09:05.321 ' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.321 --rc genhtml_branch_coverage=1 00:09:05.321 --rc genhtml_function_coverage=1 00:09:05.321 --rc genhtml_legend=1 00:09:05.321 --rc geninfo_all_blocks=1 00:09:05.321 --rc geninfo_unexecuted_blocks=1 00:09:05.321 00:09:05.321 ' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.321 --rc genhtml_branch_coverage=1 00:09:05.321 --rc genhtml_function_coverage=1 00:09:05.321 --rc genhtml_legend=1 00:09:05.321 --rc geninfo_all_blocks=1 00:09:05.321 --rc geninfo_unexecuted_blocks=1 00:09:05.321 00:09:05.321 ' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.321 16:34:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:05.321 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:05.322 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.322 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.322 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.322 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:05.322 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:05.322 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.322 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:11.894 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:11.894 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.894 16:34:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:11.894 Found net devices under 0000:86:00.0: cvl_0_0 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:11.894 Found net devices under 0000:86:00.1: cvl_0_1 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.894 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:11.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:09:11.895 00:09:11.895 --- 10.0.0.2 ping statistics --- 00:09:11.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.895 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:11.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:09:11.895 00:09:11.895 --- 10.0.0.1 ping statistics --- 00:09:11.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.895 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=407980 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 407980 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 407980 ']' 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 [2024-10-14 16:34:15.829175] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
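The two ice ports discovered above (0000:86:00.0 -> cvl_0_0, 0000:86:00.1 -> cvl_0_1) are wired together by nvmf_tcp_init: one port is moved into a target-side network namespace, both sides get a /24 address, TCP port 4420 is opened, and reachability is verified with ping in both directions. A minimal standalone sketch of that fabric, using only the names, addresses and port that appear in the trace:

  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace

The SPDK_NVMF comment on the iptables rule is what lets the teardown further down remove only the rules this test added (iptables-save | grep -v SPDK_NVMF | iptables-restore).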
00:09:11.895 [2024-10-14 16:34:15.829222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.895 [2024-10-14 16:34:15.904011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.895 [2024-10-14 16:34:15.947378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.895 [2024-10-14 16:34:15.947414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.895 [2024-10-14 16:34:15.947421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.895 [2024-10-14 16:34:15.947427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.895 [2024-10-14 16:34:15.947431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.895 [2024-10-14 16:34:15.949028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.895 [2024-10-14 16:34:15.949135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.895 [2024-10-14 16:34:15.949241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.895 [2024-10-14 16:34:15.949242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.895 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:11.895 [2024-10-14 16:34:16.081514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 Malloc0 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 [2024-10-14 16:34:16.128507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=408012 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=408014 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:11.895 { 00:09:11.895 "params": { 
00:09:11.895 "name": "Nvme$subsystem", 00:09:11.895 "trtype": "$TEST_TRANSPORT", 00:09:11.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.895 "adrfam": "ipv4", 00:09:11.895 "trsvcid": "$NVMF_PORT", 00:09:11.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.895 "hdgst": ${hdgst:-false}, 00:09:11.895 "ddgst": ${ddgst:-false} 00:09:11.895 }, 00:09:11.895 "method": "bdev_nvme_attach_controller" 00:09:11.895 } 00:09:11.895 EOF 00:09:11.895 )") 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=408016 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:11.895 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:11.895 { 00:09:11.895 "params": { 00:09:11.896 "name": "Nvme$subsystem", 00:09:11.896 "trtype": "$TEST_TRANSPORT", 00:09:11.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.896 "adrfam": "ipv4", 00:09:11.896 "trsvcid": "$NVMF_PORT", 00:09:11.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.896 "hdgst": ${hdgst:-false}, 00:09:11.896 "ddgst": ${ddgst:-false} 00:09:11.896 }, 00:09:11.896 "method": "bdev_nvme_attach_controller" 00:09:11.896 } 00:09:11.896 EOF 00:09:11.896 )") 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=408019 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:11.896 { 00:09:11.896 "params": { 
00:09:11.896 "name": "Nvme$subsystem", 00:09:11.896 "trtype": "$TEST_TRANSPORT", 00:09:11.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.896 "adrfam": "ipv4", 00:09:11.896 "trsvcid": "$NVMF_PORT", 00:09:11.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.896 "hdgst": ${hdgst:-false}, 00:09:11.896 "ddgst": ${ddgst:-false} 00:09:11.896 }, 00:09:11.896 "method": "bdev_nvme_attach_controller" 00:09:11.896 } 00:09:11.896 EOF 00:09:11.896 )") 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:11.896 { 00:09:11.896 "params": { 00:09:11.896 "name": "Nvme$subsystem", 00:09:11.896 "trtype": "$TEST_TRANSPORT", 00:09:11.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.896 "adrfam": "ipv4", 00:09:11.896 "trsvcid": "$NVMF_PORT", 00:09:11.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.896 "hdgst": ${hdgst:-false}, 00:09:11.896 "ddgst": ${ddgst:-false} 00:09:11.896 }, 00:09:11.896 "method": "bdev_nvme_attach_controller" 00:09:11.896 } 00:09:11.896 EOF 00:09:11.896 )") 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 408012 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:11.896 "params": { 00:09:11.896 "name": "Nvme1", 00:09:11.896 "trtype": "tcp", 00:09:11.896 "traddr": "10.0.0.2", 00:09:11.896 "adrfam": "ipv4", 00:09:11.896 "trsvcid": "4420", 00:09:11.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.896 "hdgst": false, 00:09:11.896 "ddgst": false 00:09:11.896 }, 00:09:11.896 "method": "bdev_nvme_attach_controller" 00:09:11.896 }' 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:11.896 "params": { 00:09:11.896 "name": "Nvme1", 00:09:11.896 "trtype": "tcp", 00:09:11.896 "traddr": "10.0.0.2", 00:09:11.896 "adrfam": "ipv4", 00:09:11.896 "trsvcid": "4420", 00:09:11.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.896 "hdgst": false, 00:09:11.896 "ddgst": false 00:09:11.896 }, 00:09:11.896 "method": "bdev_nvme_attach_controller" 00:09:11.896 }' 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:11.896 "params": { 00:09:11.896 "name": "Nvme1", 00:09:11.896 "trtype": "tcp", 00:09:11.896 "traddr": "10.0.0.2", 00:09:11.896 "adrfam": "ipv4", 00:09:11.896 "trsvcid": "4420", 00:09:11.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.896 "hdgst": false, 00:09:11.896 "ddgst": false 00:09:11.896 }, 00:09:11.896 "method": "bdev_nvme_attach_controller" 00:09:11.896 }' 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:11.896 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:11.896 "params": { 00:09:11.896 "name": "Nvme1", 00:09:11.896 "trtype": "tcp", 00:09:11.896 "traddr": "10.0.0.2", 00:09:11.896 "adrfam": "ipv4", 00:09:11.896 "trsvcid": "4420", 00:09:11.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.896 "hdgst": false, 00:09:11.896 "ddgst": false 00:09:11.896 }, 00:09:11.896 "method": "bdev_nvme_attach_controller" 00:09:11.896 }' 00:09:11.896 [2024-10-14 16:34:16.177825] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:09:11.896 [2024-10-14 16:34:16.177872] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:11.896 [2024-10-14 16:34:16.181406] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:09:11.896 [2024-10-14 16:34:16.181405] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:09:11.896 [2024-10-14 16:34:16.181452] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-14 16:34:16.181452] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:11.896 --proc-type=auto ] 00:09:11.896 [2024-10-14 16:34:16.184361] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
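Before the four bdevperf jobs are started, the target running inside cvl_0_0_ns_spdk has been provisioned entirely through rpc_cmd: bdev_set_options, framework_start_init, nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener. A sketch of the same sequence driven out of process with scripts/rpc.py (an assumed equivalent of rpc_cmd over the default /var/tmp/spdk.sock; every argument is copied from the trace):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_set_options -p 5 -c 1                    # flags exactly as traced above
  $RPC framework_start_init                          # finish startup of the --wait-for-rpc target
  $RPC nvmf_create_transport -t tcp -o -u 8192       # TCP transport, options as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each bdevperf instance then receives one of the JSON documents printed above on /dev/fd/63. A process substitution reproduces that wiring for the write job (illustrative sketch; gen_nvmf_target_json is the helper traced above, and the read/flush/unmap jobs are launched the same way with -m 0x20/0x40/0x80 and -i 2/3/4):

  BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!
  wait "$WRITE_PID"                                  # collect the job before tearing the target down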
00:09:11.896 [2024-10-14 16:34:16.184402] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:11.896 [2024-10-14 16:34:16.354080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.896 [2024-10-14 16:34:16.396424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:11.896 [2024-10-14 16:34:16.452366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.896 [2024-10-14 16:34:16.495125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.896 [2024-10-14 16:34:16.507262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:12.156 [2024-10-14 16:34:16.535334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:12.156 [2024-10-14 16:34:16.555096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.156 [2024-10-14 16:34:16.597366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:12.156 Running I/O for 1 seconds... 00:09:12.156 Running I/O for 1 seconds... 00:09:12.415 Running I/O for 1 seconds... 00:09:12.415 Running I/O for 1 seconds... 00:09:13.352 254240.00 IOPS, 993.12 MiB/s 00:09:13.352 Latency(us) 00:09:13.352 [2024-10-14T14:34:17.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.352 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:13.352 Nvme1n1 : 1.00 253855.54 991.62 0.00 0.00 501.41 222.35 1497.97 00:09:13.352 [2024-10-14T14:34:17.986Z] =================================================================================================================== 00:09:13.352 [2024-10-14T14:34:17.986Z] Total : 253855.54 991.62 0.00 0.00 501.41 222.35 1497.97 00:09:13.352 8247.00 IOPS, 32.21 MiB/s 00:09:13.352 Latency(us) 00:09:13.352 [2024-10-14T14:34:17.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.352 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:13.352 Nvme1n1 : 1.02 8278.34 32.34 0.00 0.00 15352.39 5991.86 27962.03 00:09:13.352 [2024-10-14T14:34:17.986Z] =================================================================================================================== 00:09:13.352 [2024-10-14T14:34:17.986Z] Total : 8278.34 32.34 0.00 0.00 15352.39 5991.86 27962.03 00:09:13.352 11598.00 IOPS, 45.30 MiB/s 00:09:13.352 Latency(us) 00:09:13.352 [2024-10-14T14:34:17.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.352 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:13.352 Nvme1n1 : 1.01 11638.77 45.46 0.00 0.00 10951.71 6522.39 21970.16 00:09:13.352 [2024-10-14T14:34:17.986Z] =================================================================================================================== 00:09:13.352 [2024-10-14T14:34:17.986Z] Total : 11638.77 45.46 0.00 0.00 10951.71 6522.39 21970.16 00:09:13.352 7761.00 IOPS, 30.32 MiB/s 00:09:13.352 Latency(us) 00:09:13.352 [2024-10-14T14:34:17.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.352 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:13.352 Nvme1n1 : 1.00 7869.69 30.74 0.00 0.00 16228.01 3136.37 37449.14 00:09:13.352 [2024-10-14T14:34:17.986Z] 
=================================================================================================================== 00:09:13.352 [2024-10-14T14:34:17.986Z] Total : 7869.69 30.74 0.00 0.00 16228.01 3136.37 37449.14 00:09:13.352 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 408014 00:09:13.352 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 408016 00:09:13.352 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 408019 00:09:13.352 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.352 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.352 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.352 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.352 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:13.353 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:13.353 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:13.353 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:13.353 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.353 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:13.353 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.353 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.612 rmmod nvme_tcp 00:09:13.612 rmmod nvme_fabrics 00:09:13.612 rmmod nvme_keyring 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 407980 ']' 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 407980 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 407980 ']' 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 407980 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 407980 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 407980' 00:09:13.612 killing process with pid 407980 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 407980 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 407980 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:13.612 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:09:13.871 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:13.871 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:09:13.871 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:13.871 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.871 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.871 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.871 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.780 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.781 00:09:15.781 real 0m10.827s 00:09:15.781 user 0m16.067s 00:09:15.781 sys 0m6.112s 00:09:15.781 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:15.781 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.781 ************************************ 00:09:15.781 END TEST nvmf_bdev_io_wait 00:09:15.781 ************************************ 00:09:15.781 16:34:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:15.781 16:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:15.781 16:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:15.781 16:34:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:15.781 ************************************ 00:09:15.781 START TEST nvmf_queue_depth 00:09:15.781 ************************************ 00:09:15.781 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:16.041 * Looking for test storage... 
00:09:16.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.041 --rc genhtml_branch_coverage=1 00:09:16.041 --rc genhtml_function_coverage=1 00:09:16.041 --rc genhtml_legend=1 00:09:16.041 --rc geninfo_all_blocks=1 00:09:16.041 --rc geninfo_unexecuted_blocks=1 00:09:16.041 00:09:16.041 ' 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.041 --rc genhtml_branch_coverage=1 00:09:16.041 --rc genhtml_function_coverage=1 00:09:16.041 --rc genhtml_legend=1 00:09:16.041 --rc geninfo_all_blocks=1 00:09:16.041 --rc geninfo_unexecuted_blocks=1 00:09:16.041 00:09:16.041 ' 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.041 --rc genhtml_branch_coverage=1 00:09:16.041 --rc genhtml_function_coverage=1 00:09:16.041 --rc genhtml_legend=1 00:09:16.041 --rc geninfo_all_blocks=1 00:09:16.041 --rc geninfo_unexecuted_blocks=1 00:09:16.041 00:09:16.041 ' 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:16.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.041 --rc genhtml_branch_coverage=1 00:09:16.041 --rc genhtml_function_coverage=1 00:09:16.041 --rc genhtml_legend=1 00:09:16.041 --rc geninfo_all_blocks=1 00:09:16.041 --rc geninfo_unexecuted_blocks=1 00:09:16.041 00:09:16.041 ' 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.041 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.042 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:22.613 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:22.613 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:22.613 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:22.614 Found net devices under 0000:86:00.0: cvl_0_0 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:22.614 Found net devices under 0000:86:00.1: cvl_0_1 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:22.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:09:22.614 00:09:22.614 --- 10.0.0.2 ping statistics --- 00:09:22.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.614 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:22.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:09:22.614 00:09:22.614 --- 10.0.0.1 ping statistics --- 00:09:22.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.614 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=411829 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 411829 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 411829 ']' 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.614 16:34:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.614 [2024-10-14 16:34:26.631405] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:09:22.614 [2024-10-14 16:34:26.631460] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.614 [2024-10-14 16:34:26.706025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.614 [2024-10-14 16:34:26.746160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.614 [2024-10-14 16:34:26.746195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.614 [2024-10-14 16:34:26.746202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.614 [2024-10-14 16:34:26.746208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.614 [2024-10-14 16:34:26.746213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:22.614 [2024-10-14 16:34:26.746780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.872 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.872 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:22.872 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:22.872 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.873 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.873 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.873 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:22.873 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.873 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.873 [2024-10-14 16:34:27.501796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.873 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.873 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:22.873 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.873 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.131 Malloc0 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.131 16:34:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.131 [2024-10-14 16:34:27.551979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.131 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=412070 00:09:23.132 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:23.132 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:23.132 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 412070 /var/tmp/bdevperf.sock 00:09:23.132 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 412070 ']' 00:09:23.132 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:23.132 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.132 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:23.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:23.132 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.132 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.132 [2024-10-14 16:34:27.602773] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:09:23.132 [2024-10-14 16:34:27.602819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid412070 ] 00:09:23.132 [2024-10-14 16:34:27.671366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.132 [2024-10-14 16:34:27.713259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.390 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.390 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:23.390 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:23.390 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.390 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.390 NVMe0n1 00:09:23.390 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.390 16:34:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:23.391 Running I/O for 10 seconds... 00:09:25.701 12230.00 IOPS, 47.77 MiB/s [2024-10-14T14:34:31.300Z] 12277.50 IOPS, 47.96 MiB/s [2024-10-14T14:34:32.270Z] 12283.33 IOPS, 47.98 MiB/s [2024-10-14T14:34:33.205Z] 12287.25 IOPS, 48.00 MiB/s [2024-10-14T14:34:34.139Z] 12453.00 IOPS, 48.64 MiB/s [2024-10-14T14:34:35.074Z] 12461.33 IOPS, 48.68 MiB/s [2024-10-14T14:34:36.449Z] 12511.29 IOPS, 48.87 MiB/s [2024-10-14T14:34:37.385Z] 12529.62 IOPS, 48.94 MiB/s [2024-10-14T14:34:38.321Z] 12553.67 IOPS, 49.04 MiB/s [2024-10-14T14:34:38.321Z] 12566.20 IOPS, 49.09 MiB/s 00:09:33.687 Latency(us) 00:09:33.687 [2024-10-14T14:34:38.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.687 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:33.687 Verification LBA range: start 0x0 length 0x4000 00:09:33.687 NVMe0n1 : 10.06 12593.44 49.19 0.00 0.00 81056.48 19348.72 50930.83 00:09:33.687 [2024-10-14T14:34:38.321Z] =================================================================================================================== 00:09:33.687 [2024-10-14T14:34:38.321Z] Total : 12593.44 49.19 0.00 0.00 81056.48 19348.72 50930.83 00:09:33.687 { 00:09:33.687 "results": [ 00:09:33.687 { 00:09:33.687 "job": "NVMe0n1", 00:09:33.687 "core_mask": "0x1", 00:09:33.687 "workload": "verify", 00:09:33.687 "status": "finished", 00:09:33.687 "verify_range": { 00:09:33.687 "start": 0, 00:09:33.687 "length": 16384 00:09:33.687 }, 00:09:33.687 "queue_depth": 1024, 00:09:33.687 "io_size": 4096, 00:09:33.687 "runtime": 10.055394, 00:09:33.687 "iops": 12593.43989902335, 00:09:33.687 "mibps": 49.19312460555996, 00:09:33.687 "io_failed": 0, 00:09:33.687 "io_timeout": 0, 00:09:33.687 "avg_latency_us": 81056.48257851021, 00:09:33.687 "min_latency_us": 19348.72380952381, 00:09:33.687 "max_latency_us": 50930.834285714285 00:09:33.687 } 00:09:33.687 ], 00:09:33.687 "core_count": 1 00:09:33.687 } 00:09:33.687 16:34:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 412070 00:09:33.687 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 412070 ']' 00:09:33.687 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 412070 00:09:33.687 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:33.687 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.687 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 412070 00:09:33.687 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:33.687 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:33.687 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 412070' 00:09:33.687 killing process with pid 412070 00:09:33.687 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 412070 00:09:33.687 Received shutdown signal, test time was about 10.000000 seconds 00:09:33.687 00:09:33.687 Latency(us) 00:09:33.687 [2024-10-14T14:34:38.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.687 [2024-10-14T14:34:38.321Z] =================================================================================================================== 00:09:33.687 [2024-10-14T14:34:38.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.687 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 412070 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.946 rmmod nvme_tcp 00:09:33.946 rmmod nvme_fabrics 00:09:33.946 rmmod nvme_keyring 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 411829 ']' 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 411829 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 411829 ']' 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 411829 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 411829 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 411829' 00:09:33.946 killing process with pid 411829 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 411829 00:09:33.946 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 411829 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.205 16:34:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.109 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.109 00:09:36.109 real 0m20.294s 00:09:36.109 user 0m23.717s 00:09:36.109 sys 0m6.106s 00:09:36.109 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.109 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.109 ************************************ 00:09:36.109 END TEST nvmf_queue_depth 00:09:36.109 ************************************ 00:09:36.109 16:34:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:36.109 16:34:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:36.109 16:34:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.109 16:34:40 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.368 ************************************ 00:09:36.368 START TEST nvmf_target_multipath 00:09:36.368 ************************************ 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:36.368 * Looking for test storage... 00:09:36.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:36.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.368 --rc genhtml_branch_coverage=1 00:09:36.368 --rc genhtml_function_coverage=1 00:09:36.368 --rc genhtml_legend=1 00:09:36.368 --rc geninfo_all_blocks=1 00:09:36.368 --rc geninfo_unexecuted_blocks=1 00:09:36.368 00:09:36.368 ' 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:36.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.368 --rc genhtml_branch_coverage=1 00:09:36.368 --rc genhtml_function_coverage=1 00:09:36.368 --rc genhtml_legend=1 00:09:36.368 --rc geninfo_all_blocks=1 00:09:36.368 --rc geninfo_unexecuted_blocks=1 00:09:36.368 00:09:36.368 ' 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:36.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.368 --rc genhtml_branch_coverage=1 00:09:36.368 --rc genhtml_function_coverage=1 00:09:36.368 --rc genhtml_legend=1 00:09:36.368 --rc geninfo_all_blocks=1 00:09:36.368 --rc geninfo_unexecuted_blocks=1 00:09:36.368 00:09:36.368 ' 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:36.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.368 --rc genhtml_branch_coverage=1 00:09:36.368 --rc genhtml_function_coverage=1 00:09:36.368 --rc genhtml_legend=1 00:09:36.368 --rc geninfo_all_blocks=1 00:09:36.368 --rc geninfo_unexecuted_blocks=1 00:09:36.368 00:09:36.368 ' 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.368 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.369 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:42.933 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:42.933 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:42.933 Found net devices under 0000:86:00.0: cvl_0_0 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.933 16:34:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:42.933 Found net devices under 0000:86:00.1: cvl_0_1 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.933 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:09:42.934 00:09:42.934 --- 10.0.0.2 ping statistics --- 00:09:42.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.934 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:09:42.934 00:09:42.934 --- 10.0.0.1 ping statistics --- 00:09:42.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.934 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:42.934 only one NIC for nvmf test 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.934 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.934 rmmod nvme_tcp 00:09:42.934 rmmod nvme_fabrics 00:09:42.934 rmmod nvme_keyring 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.934 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.839 00:09:44.839 real 0m8.409s 00:09:44.839 user 0m1.862s 00:09:44.839 sys 0m4.538s 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:44.839 ************************************ 00:09:44.839 END TEST nvmf_target_multipath 00:09:44.839 ************************************ 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.839 ************************************ 00:09:44.839 START TEST nvmf_zcopy 00:09:44.839 ************************************ 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:44.839 * Looking for test storage... 
00:09:44.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.839 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:44.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.839 --rc genhtml_branch_coverage=1 00:09:44.839 --rc genhtml_function_coverage=1 00:09:44.839 --rc genhtml_legend=1 00:09:44.840 --rc geninfo_all_blocks=1 00:09:44.840 --rc geninfo_unexecuted_blocks=1 00:09:44.840 00:09:44.840 ' 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:44.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.840 --rc genhtml_branch_coverage=1 00:09:44.840 --rc genhtml_function_coverage=1 00:09:44.840 --rc genhtml_legend=1 00:09:44.840 --rc geninfo_all_blocks=1 00:09:44.840 --rc geninfo_unexecuted_blocks=1 00:09:44.840 00:09:44.840 ' 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:44.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.840 --rc genhtml_branch_coverage=1 00:09:44.840 --rc genhtml_function_coverage=1 00:09:44.840 --rc genhtml_legend=1 00:09:44.840 --rc geninfo_all_blocks=1 00:09:44.840 --rc geninfo_unexecuted_blocks=1 00:09:44.840 00:09:44.840 ' 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:44.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.840 --rc genhtml_branch_coverage=1 00:09:44.840 --rc genhtml_function_coverage=1 00:09:44.840 --rc genhtml_legend=1 00:09:44.840 --rc geninfo_all_blocks=1 00:09:44.840 --rc geninfo_unexecuted_blocks=1 00:09:44.840 00:09:44.840 ' 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.840 16:34:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:51.408 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:51.408 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:51.408 Found net devices under 0000:86:00.0: cvl_0_0 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:51.408 Found net devices under 0000:86:00.1: cvl_0_1 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.408 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:51.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:09:51.409 00:09:51.409 --- 10.0.0.2 ping statistics --- 00:09:51.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.409 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:09:51.409 00:09:51.409 --- 10.0.0.1 ping statistics --- 00:09:51.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.409 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=420974 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 420974 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 420974 ']' 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.409 [2024-10-14 16:34:55.523015] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:09:51.409 [2024-10-14 16:34:55.523058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.409 [2024-10-14 16:34:55.594543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.409 [2024-10-14 16:34:55.633039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.409 [2024-10-14 16:34:55.633074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.409 [2024-10-14 16:34:55.633081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.409 [2024-10-14 16:34:55.633087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.409 [2024-10-14 16:34:55.633092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.409 [2024-10-14 16:34:55.633629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.409 [2024-10-14 16:34:55.779449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.409 [2024-10-14 16:34:55.803676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.409 malloc0 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:51.409 { 00:09:51.409 "params": { 00:09:51.409 "name": "Nvme$subsystem", 00:09:51.409 "trtype": "$TEST_TRANSPORT", 00:09:51.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.409 "adrfam": "ipv4", 00:09:51.409 "trsvcid": "$NVMF_PORT", 00:09:51.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.409 "hdgst": ${hdgst:-false}, 00:09:51.409 "ddgst": ${ddgst:-false} 00:09:51.409 }, 00:09:51.409 "method": "bdev_nvme_attach_controller" 00:09:51.409 } 00:09:51.409 EOF 00:09:51.409 )") 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
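The zcopy.sh@33 step above drives I/O against the new subsystem by generating the bdevperf controller config on the fly (gen_nvmf_target_json, traced from nvmf/common.sh@558 onward) and handing it to bdevperf on an anonymous file descriptor instead of a temp file; the resolved bdev_nvme_attach_controller entry is printed just below. A sketch of the same invocation, using process substitution as an equivalent of the /dev/fd/62 redirection in the trace (flags copied from zcopy.sh@33; gen_nvmf_target_json comes from the sourced nvmf/common.sh):

    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

    # 10 s verify workload, queue depth 128, 8 KiB I/O, config delivered on an fd
    "$BDEVPERF" --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192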
00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:51.409 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:51.409 "params": { 00:09:51.409 "name": "Nvme1", 00:09:51.409 "trtype": "tcp", 00:09:51.409 "traddr": "10.0.0.2", 00:09:51.409 "adrfam": "ipv4", 00:09:51.409 "trsvcid": "4420", 00:09:51.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.409 "hdgst": false, 00:09:51.409 "ddgst": false 00:09:51.409 }, 00:09:51.409 "method": "bdev_nvme_attach_controller" 00:09:51.409 }' 00:09:51.409 [2024-10-14 16:34:55.889227] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:09:51.409 [2024-10-14 16:34:55.889267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420994 ] 00:09:51.410 [2024-10-14 16:34:55.957573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.410 [2024-10-14 16:34:55.999932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.668 Running I/O for 10 seconds... 00:09:53.539 8684.00 IOPS, 67.84 MiB/s [2024-10-14T14:34:59.548Z] 8741.50 IOPS, 68.29 MiB/s [2024-10-14T14:35:00.480Z] 8767.67 IOPS, 68.50 MiB/s [2024-10-14T14:35:01.416Z] 8777.50 IOPS, 68.57 MiB/s [2024-10-14T14:35:02.351Z] 8790.40 IOPS, 68.67 MiB/s [2024-10-14T14:35:03.287Z] 8802.83 IOPS, 68.77 MiB/s [2024-10-14T14:35:04.223Z] 8812.14 IOPS, 68.84 MiB/s [2024-10-14T14:35:05.599Z] 8817.25 IOPS, 68.88 MiB/s [2024-10-14T14:35:06.536Z] 8815.67 IOPS, 68.87 MiB/s [2024-10-14T14:35:06.536Z] 8821.70 IOPS, 68.92 MiB/s 00:10:01.902 Latency(us) 00:10:01.902 [2024-10-14T14:35:06.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.902 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:01.902 Verification LBA range: start 0x0 length 0x1000 00:10:01.902 Nvme1n1 : 10.01 8824.27 68.94 0.00 0.00 14464.38 1732.02 23842.62 00:10:01.902 [2024-10-14T14:35:06.536Z] =================================================================================================================== 00:10:01.902 [2024-10-14T14:35:06.536Z] Total : 8824.27 68.94 0.00 0.00 14464.38 1732.02 23842.62 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=422830 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:01.902 { 00:10:01.902 "params": { 00:10:01.902 "name": 
"Nvme$subsystem", 00:10:01.902 "trtype": "$TEST_TRANSPORT", 00:10:01.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.902 "adrfam": "ipv4", 00:10:01.902 "trsvcid": "$NVMF_PORT", 00:10:01.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.902 "hdgst": ${hdgst:-false}, 00:10:01.902 "ddgst": ${ddgst:-false} 00:10:01.902 }, 00:10:01.902 "method": "bdev_nvme_attach_controller" 00:10:01.902 } 00:10:01.902 EOF 00:10:01.902 )") 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:01.902 [2024-10-14 16:35:06.360571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.360611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:01.902 16:35:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:01.902 "params": { 00:10:01.902 "name": "Nvme1", 00:10:01.902 "trtype": "tcp", 00:10:01.902 "traddr": "10.0.0.2", 00:10:01.902 "adrfam": "ipv4", 00:10:01.902 "trsvcid": "4420", 00:10:01.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.902 "hdgst": false, 00:10:01.902 "ddgst": false 00:10:01.902 }, 00:10:01.902 "method": "bdev_nvme_attach_controller" 00:10:01.902 }' 00:10:01.902 [2024-10-14 16:35:06.372565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.372578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.384594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.384609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.396629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.396638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.398868] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:10:01.902 [2024-10-14 16:35:06.398907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422830 ] 00:10:01.902 [2024-10-14 16:35:06.408659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.408670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.420684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.420693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.432718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.432729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.444750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.444759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.456781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.456791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.466944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.902 [2024-10-14 16:35:06.468814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.468823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.480845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.480870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.492876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.902 [2024-10-14 16:35:06.492885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.902 [2024-10-14 16:35:06.504907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.903 [2024-10-14 16:35:06.504917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.903 [2024-10-14 16:35:06.508689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.903 [2024-10-14 16:35:06.516950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.903 [2024-10-14 16:35:06.516961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.903 [2024-10-14 16:35:06.528981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.903 [2024-10-14 16:35:06.529001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.541009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.541022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.553051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:02.162 [2024-10-14 16:35:06.553071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.565076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.565087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.577101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.577112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.589135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.589144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.601186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.601208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.613213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.613227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.625242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.625257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.637273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.637288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.649298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.649307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.661337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.661355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 Running I/O for 5 seconds... 
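Once the second bdevperf instance starts (5 s randrw at a 50/50 mix, same 8 KiB I/O size and queue depth 128), the target log fills with paired "Requested NSID 1 already in use" / "Unable to add namespace" errors. malloc0 was already attached as namespace 1 during setup, so each pair is an nvmf_subsystem_add_ns RPC being rejected; the steady repetition while I/O is in flight suggests the test is deliberately re-issuing the call to exercise the subsystem pause/resume path under zero-copy load (an inference, the script's intent is not stated in this log). A hypothetical reproduction of one such rejected call through the standard SPDK RPC client (scripts/rpc.py; the trace itself uses the test suite's rpc_cmd wrapper), assuming the default /var/tmp/spdk.sock:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # namespace 1 already holds malloc0, so this add is expected to fail with
    # "Requested NSID 1 already in use" on the target side
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1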
00:10:02.162 [2024-10-14 16:35:06.677488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.677508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.691257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.691276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.704822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.704840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.718966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.718984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.732188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.732206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.746249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.746268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.759899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.759917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.773847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.773866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.162 [2024-10-14 16:35:06.787502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.162 [2024-10-14 16:35:06.787520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.801713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.801732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.815292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.815310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.828892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.828910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.842614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.842632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.856492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.856510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.870433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 
[2024-10-14 16:35:06.870454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.884550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.884573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.895141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.895158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.909329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.909346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.922714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.922731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.936495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.936513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.949892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.949909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.964063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.964081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.978097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.978115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:06.991662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:06.991683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:07.005584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:07.005607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:07.019311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:07.019329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:07.032851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:07.032869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.421 [2024-10-14 16:35:07.046898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.421 [2024-10-14 16:35:07.046917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.679 [2024-10-14 16:35:07.061021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.679 [2024-10-14 16:35:07.061039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.679 [2024-10-14 16:35:07.074129] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.679 [2024-10-14 16:35:07.074148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.679 [2024-10-14 16:35:07.087906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.679 [2024-10-14 16:35:07.087924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.679 [2024-10-14 16:35:07.101577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.679 [2024-10-14 16:35:07.101595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.115129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.115147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.128858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.128876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.142646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.142669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.156378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.156396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.170151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.170168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.183502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.183520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.197453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.197472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.211011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.211030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.224688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.224709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.238668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.238687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.252441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.252461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.266083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.266101] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.280219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.280237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.294236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.294255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.680 [2024-10-14 16:35:07.308012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.680 [2024-10-14 16:35:07.308030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.938 [2024-10-14 16:35:07.322157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.938 [2024-10-14 16:35:07.322178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.938 [2024-10-14 16:35:07.332914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.938 [2024-10-14 16:35:07.332933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.938 [2024-10-14 16:35:07.348060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.938 [2024-10-14 16:35:07.348079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.938 [2024-10-14 16:35:07.359438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.938 [2024-10-14 16:35:07.359457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.938 [2024-10-14 16:35:07.373736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.373755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.387625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.387644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.401400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.401423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.415255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.415273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.429108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.429128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.442738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.442758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.456712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.456730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.470374] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.470393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.484222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.484241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.497660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.497678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.511363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.511381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.520564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.520582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.534463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.534481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.547987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.548005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:02.939 [2024-10-14 16:35:07.561617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:02.939 [2024-10-14 16:35:07.561652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.575412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.575431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.589421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.589439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.603448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.603467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.617356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.617375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.631370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.631389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.644939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.644957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.658757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.658783] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 16881.00 IOPS, 131.88 MiB/s [2024-10-14T14:35:07.832Z] [2024-10-14 16:35:07.672220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.672239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.685879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.685897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.699321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.699339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.713325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.713343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.727258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.727275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.740967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.740985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.754817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.754835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.768312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.768333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.782331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.782348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.796749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.796767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.807320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.807339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.198 [2024-10-14 16:35:07.821765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.198 [2024-10-14 16:35:07.821783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.456 [2024-10-14 16:35:07.835399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.456 [2024-10-14 16:35:07.835418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:07.849074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.849092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 
16:35:07.862533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.862550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:07.876537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.876554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:07.890484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.890502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:07.904122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.904140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:07.917676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.917695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:07.931233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.931250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:07.946030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.946048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:07.961333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.961351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:07.975546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.975565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:07.989075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:07.989092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:08.002869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:08.002888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:08.016647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:08.016667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:08.030444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:08.030462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:08.044226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:08.044244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:08.058143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:08.058162] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:08.071631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:08.071649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.457 [2024-10-14 16:35:08.085643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.457 [2024-10-14 16:35:08.085660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.099138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.099156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.113441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.113459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.124812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.124830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.138629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.138647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.151909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.151927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.165544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.165561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.179599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.179622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.190614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.190632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.204893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.204911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.218711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.218729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.232389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.232413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.245659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.245677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.259397] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.259415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.715 [2024-10-14 16:35:08.272964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.715 [2024-10-14 16:35:08.272982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.716 [2024-10-14 16:35:08.286478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.716 [2024-10-14 16:35:08.286495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.716 [2024-10-14 16:35:08.299999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.716 [2024-10-14 16:35:08.300016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.716 [2024-10-14 16:35:08.309546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.716 [2024-10-14 16:35:08.309563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.716 [2024-10-14 16:35:08.323401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.716 [2024-10-14 16:35:08.323419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.716 [2024-10-14 16:35:08.336895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.716 [2024-10-14 16:35:08.336913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.716 [2024-10-14 16:35:08.350697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:03.716 [2024-10-14 16:35:08.350716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.364351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.364369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.377971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.377989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.391498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.391517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.404818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.404836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.418549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.418572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.432496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.432514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.446247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.446265] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.460045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.460063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.474038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.474059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.487833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.487852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.501634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.501652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.515203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.515220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.528733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.528752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.542676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.542693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.556086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.556103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.569996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.570014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.584028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.584046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.597733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.597751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.611505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.611524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.625523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.625543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.639175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.639195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.652994] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.653014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 [2024-10-14 16:35:08.667653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.667676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.066 16943.50 IOPS, 132.37 MiB/s [2024-10-14T14:35:08.700Z] [2024-10-14 16:35:08.683369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.066 [2024-10-14 16:35:08.683392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.698117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.698137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.713369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.713389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.727591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.727627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.741325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.741344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.755118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.755136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.768934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.768953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.782952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.782971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.797217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.797235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.810712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.810730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.824378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.824397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.838337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.838355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.852256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:04.359 [2024-10-14 16:35:08.852275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.865926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.865944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.879561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.879579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.893512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.893531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.906720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.906738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.920728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.920748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.929505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.929524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.943610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.943633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.957251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.957269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.359 [2024-10-14 16:35:08.970906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.359 [2024-10-14 16:35:08.970925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:08.985066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:08.985085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:08.998660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:08.998679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.012465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.012482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.025804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.025822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.039434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.039452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.053168] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.053185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.066769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.066787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.080498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.080516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.094392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.094410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.108317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.108335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.121885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.121904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.135706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.135725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.149205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.149223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.162657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.162676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.176478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.176496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.190543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.190562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.204314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.204333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.217757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.217776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.231533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.231552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.618 [2024-10-14 16:35:09.245178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.618 [2024-10-14 16:35:09.245196] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.258697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.258715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.272437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.272456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.286227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.286244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.299872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.299890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.313649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.313667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.327751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.327769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.341359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.341377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.355166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.355185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.369000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.369018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.382943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.382962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.396946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.396965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.410887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.410905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.424654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.424672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.438476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.438494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.452669] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.452687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.466104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.466122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.480467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.480485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.496182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.496201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.877 [2024-10-14 16:35:09.510260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.877 [2024-10-14 16:35:09.510279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.524248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.524266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.538001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.538020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.551768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.551786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.565570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.565587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.579068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.579086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.592986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.593004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.606920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.606937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.620772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.620789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.634519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.634537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.648312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.648330] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.662070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.662088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 16975.33 IOPS, 132.62 MiB/s [2024-10-14T14:35:09.770Z] [2024-10-14 16:35:09.675753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.675771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.689397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.689416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.703063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.703081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.716928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.716946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.730712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.730730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.745329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.745347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.136 [2024-10-14 16:35:09.759005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.136 [2024-10-14 16:35:09.759022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.772949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.772967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.786697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.786715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.800549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.800567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.814494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.814512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.824805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.824823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.838684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.838703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 
16:35:09.852302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.852320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.865921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.865939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.880075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.880093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.893798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.893816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.907849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.907867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.918641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.918658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.932951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.932969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.946477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.946495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.960214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.960231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.973835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.973857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:09.987466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:09.987485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:10.001527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:10.001546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:10.015821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:10.015840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.395 [2024-10-14 16:35:10.030193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.395 [2024-10-14 16:35:10.030213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.654 [2024-10-14 16:35:10.041256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.654 [2024-10-14 16:35:10.041275] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:05.654 [2024-10-14 16:35:10.055626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:05.654 [2024-10-14 16:35:10.055645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats, with fresh timestamps, for every further nvmf_subsystem_add_ns attempt while the subsystem is paused; only the periodic throughput samples and the end-of-run summary are kept below ...]
00:10:06.172 16969.50 IOPS, 132.57 MiB/s [2024-10-14T14:35:10.806Z]
00:10:07.208 16987.40 IOPS, 132.71 MiB/s [2024-10-14T14:35:11.842Z]
00:10:07.208 Latency(us)
00:10:07.208 [2024-10-14T14:35:11.842Z] Device Information                                                         : runtime(s)      IOPS     MiB/s   Fail/s     TO/s   Average       min       max
00:10:07.208 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:07.208 Nvme1n1                                                                             :       5.01  16990.03    132.73     0.00     0.00   7527.01   3526.46  16352.79
00:10:07.208 [2024-10-14T14:35:11.842Z] ===================================================================================================================
00:10:07.208 [2024-10-14T14:35:11.842Z] Total                                                                               :             16990.03    132.73     0.00     0.00   7527.01   3526.46  16352.79
00:10:07.208 [2024-10-14 16:35:11.773710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.208 [2024-10-14
16:35:11.773726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.208 [2024-10-14 16:35:11.785742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.208 [2024-10-14 16:35:11.785755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.208 [2024-10-14 16:35:11.797774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.208 [2024-10-14 16:35:11.797783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.208 [2024-10-14 16:35:11.809806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.208 [2024-10-14 16:35:11.809816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.208 [2024-10-14 16:35:11.821848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.208 [2024-10-14 16:35:11.821860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.208 [2024-10-14 16:35:11.833880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:07.208 [2024-10-14 16:35:11.833889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (422830) - No such process 00:10:07.208 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 422830 00:10:07.208 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.208 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.208 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.466 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.466 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:07.466 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.466 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.466 delay0 00:10:07.466 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.466 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:07.466 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.466 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.466 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.466 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:07.466 [2024-10-14 16:35:11.927558] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:14.025 [2024-10-14 16:35:18.149636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xdc78e0 is same with the state(6) to be set 00:10:14.025 Initializing NVMe Controllers 00:10:14.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:14.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:14.025 Initialization complete. Launching workers. 00:10:14.025 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 996 00:10:14.025 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1268, failed to submit 48 00:10:14.025 success 1099, unsuccessful 169, failed 0 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.025 rmmod nvme_tcp 00:10:14.025 rmmod nvme_fabrics 00:10:14.025 rmmod nvme_keyring 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 420974 ']' 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 420974 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 420974 ']' 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 420974 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 420974 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 420974' 00:10:14.025 killing process with pid 420974 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 420974 00:10:14.025 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 420974 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.026 16:35:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.928 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.928 00:10:15.928 real 0m31.266s 00:10:15.928 user 0m41.604s 00:10:15.928 sys 0m11.086s 00:10:15.928 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.928 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.928 ************************************ 00:10:15.928 END TEST nvmf_zcopy 00:10:15.928 ************************************ 00:10:15.928 16:35:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:15.928 16:35:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:15.928 16:35:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.928 16:35:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.188 ************************************ 00:10:16.188 START TEST nvmf_nmic 00:10:16.188 ************************************ 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:16.188 * Looking for test storage... 
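For reference, the namespace swap the zcopy test performed above (drop NSID 1, wrap malloc0 in a delay bdev, re-attach it, then drive it with the abort example) boils down to the RPC sequence below. This is only a sketch: it calls scripts/rpc.py directly instead of the test's rpc_cmd wrapper, and it assumes a target is already running with bdev malloc0 and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, as earlier in this run.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk          # SPDK checkout used by this job
  NQN=nqn.2016-06.io.spdk:cnode1
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns $NQN 1            # remove the existing namespace
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # delay bdev stacked on malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns $NQN delay0 -n 1     # expose delay0 as NSID 1
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'   # abort workload over NVMe/TCP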
00:10:16.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:16.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.188 --rc genhtml_branch_coverage=1 00:10:16.188 --rc genhtml_function_coverage=1 00:10:16.188 --rc genhtml_legend=1 00:10:16.188 --rc geninfo_all_blocks=1 00:10:16.188 --rc geninfo_unexecuted_blocks=1 00:10:16.188 00:10:16.188 ' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:16.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.188 --rc genhtml_branch_coverage=1 00:10:16.188 --rc genhtml_function_coverage=1 00:10:16.188 --rc genhtml_legend=1 00:10:16.188 --rc geninfo_all_blocks=1 00:10:16.188 --rc geninfo_unexecuted_blocks=1 00:10:16.188 00:10:16.188 ' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:16.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.188 --rc genhtml_branch_coverage=1 00:10:16.188 --rc genhtml_function_coverage=1 00:10:16.188 --rc genhtml_legend=1 00:10:16.188 --rc geninfo_all_blocks=1 00:10:16.188 --rc geninfo_unexecuted_blocks=1 00:10:16.188 00:10:16.188 ' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:16.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.188 --rc genhtml_branch_coverage=1 00:10:16.188 --rc genhtml_function_coverage=1 00:10:16.188 --rc genhtml_legend=1 00:10:16.188 --rc geninfo_all_blocks=1 00:10:16.188 --rc geninfo_unexecuted_blocks=1 00:10:16.188 00:10:16.188 ' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
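The trace above is scripts/common.sh checking whether the installed lcov (1.15 here) is older than 2 before choosing the --rc lcov_* coverage options. A minimal stand-alone sketch of that kind of dotted-version comparison in bash (a simplified stand-in, not the actual cmp_versions implementation):

  # Returns success (0) when the first dot-separated version is lower than the second.
  ver_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1
  }
  ver_lt 1.15 2 && echo "lcov < 2: keep the --rc lcov_branch_coverage/lcov_function_coverage options"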
00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.188 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:16.188 
16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:16.189 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:22.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:22.749 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:22.749 16:35:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:22.749 Found net devices under 0000:86:00.0: cvl_0_0 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:22.749 Found net devices under 0000:86:00.1: cvl_0_1 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.749 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:22.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:22.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:10:22.750 00:10:22.750 --- 10.0.0.2 ping statistics --- 00:10:22.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.750 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:22.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:10:22.750 00:10:22.750 --- 10.0.0.1 ping statistics --- 00:10:22.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.750 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=428210 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 428210 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 428210 ']' 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.750 16:35:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 [2024-10-14 16:35:26.848120] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
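Condensed, the network rig that nvmf_tcp_init traced above looks like the following. Interface names, addresses and the 4420 port are copied from the log; the ipts helper is expanded to its iptables form, so treat this as a sketch rather than the exact common.sh code:

ip netns add cvl_0_0_ns_spdk                          # target port lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator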
00:10:22.750 [2024-10-14 16:35:26.848175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.750 [2024-10-14 16:35:26.924729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.750 [2024-10-14 16:35:26.967088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.750 [2024-10-14 16:35:26.967127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.750 [2024-10-14 16:35:26.967134] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.750 [2024-10-14 16:35:26.967140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.750 [2024-10-14 16:35:26.967144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.750 [2024-10-14 16:35:26.968732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.750 [2024-10-14 16:35:26.968840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.750 [2024-10-14 16:35:26.968968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.750 [2024-10-14 16:35:26.968969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 [2024-10-14 16:35:27.117733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 Malloc0 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic 
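The four reactor threads come from the -m 0xF core mask passed to nvmf_tgt, which nvmfappstart launches inside the target namespace. Roughly, with the binary path and flags copied from the log (the polling loop is only one plausible stand-in for the waitforlisten helper):

ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                            # 428210 in this run
# -i 0: shared-memory instance id, -e 0xFFFF: enable all tracepoint groups,
# -m 0xF: run reactors on cores 0-3 (hence the four "Reactor started" lines).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # RPC socket is up, target is ready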
-- common/autotest_common.sh@10 -- # set +x 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 [2024-10-14 16:35:27.179160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:22.750 test case1: single bdev can't be used in multiple subsystems 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.750 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.750 [2024-10-14 16:35:27.203030] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:22.750 [2024-10-14 16:35:27.203050] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:22.750 [2024-10-14 16:35:27.203058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.750 request: 00:10:22.750 { 00:10:22.750 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:22.750 "namespace": { 00:10:22.750 "bdev_name": "Malloc0", 00:10:22.750 "no_auto_visible": false 
00:10:22.751 }, 00:10:22.751 "method": "nvmf_subsystem_add_ns", 00:10:22.751 "req_id": 1 00:10:22.751 } 00:10:22.751 Got JSON-RPC error response 00:10:22.751 response: 00:10:22.751 { 00:10:22.751 "code": -32602, 00:10:22.751 "message": "Invalid parameters" 00:10:22.751 } 00:10:22.751 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:22.751 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:22.751 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:22.751 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:22.751 Adding namespace failed - expected result. 00:10:22.751 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:22.751 test case2: host connect to nvmf target in multiple paths 00:10:22.751 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:22.751 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.751 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.751 [2024-10-14 16:35:27.215175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:22.751 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.751 16:35:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:24.125 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:25.058 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:25.058 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:25.058 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.058 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:25.058 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.959 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.959 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.959 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.959 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.959 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.959 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:26.959 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
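Stripped of the rpc_cmd plumbing, the provisioning and connect sequence for both nmic test cases is short. Everything below is lifted from the RPCs and nvme-cli calls in the log; it is a condensed sketch, not the nmic.sh source, and the || echo line only marks the failure that case 1 deliberately provokes:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# test case 1: the same bdev cannot back a namespace in a second subsystem
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
  || echo 'expected: Malloc0 already claimed (exclusive_write) by cnode1'
# test case 2: a second listener, then one host connection per path
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
      --hostid=00ad29c2-ccbd-e911-906e-0017a4403562)
nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421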
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:26.959 [global] 00:10:26.959 thread=1 00:10:26.959 invalidate=1 00:10:26.959 rw=write 00:10:26.959 time_based=1 00:10:26.959 runtime=1 00:10:26.959 ioengine=libaio 00:10:26.959 direct=1 00:10:26.959 bs=4096 00:10:26.959 iodepth=1 00:10:26.959 norandommap=0 00:10:26.959 numjobs=1 00:10:26.959 00:10:26.959 verify_dump=1 00:10:26.959 verify_backlog=512 00:10:26.959 verify_state_save=0 00:10:26.959 do_verify=1 00:10:26.959 verify=crc32c-intel 00:10:26.959 [job0] 00:10:26.959 filename=/dev/nvme0n1 00:10:26.959 Could not set queue depth (nvme0n1) 00:10:27.218 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.218 fio-3.35 00:10:27.218 Starting 1 thread 00:10:28.591 00:10:28.591 job0: (groupid=0, jobs=1): err= 0: pid=429283: Mon Oct 14 16:35:32 2024 00:10:28.591 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:10:28.591 slat (nsec): min=9429, max=24538, avg=22392.36, stdev=3012.60 00:10:28.591 clat (usec): min=40838, max=41064, avg=40967.22, stdev=65.69 00:10:28.591 lat (usec): min=40862, max=41087, avg=40989.61, stdev=64.46 00:10:28.591 clat percentiles (usec): 00:10:28.591 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:28.591 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:28.591 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:28.591 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:28.591 | 99.99th=[41157] 00:10:28.591 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:10:28.591 slat (usec): min=10, max=29691, avg=70.03, stdev=1311.67 00:10:28.591 clat (usec): min=116, max=333, avg=162.22, stdev=18.21 00:10:28.591 lat (usec): min=128, max=30011, avg=232.25, stdev=1318.75 00:10:28.591 clat percentiles (usec): 00:10:28.591 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 141], 20.00th=[ 155], 00:10:28.591 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:10:28.591 | 70.00th=[ 169], 80.00th=[ 172], 90.00th=[ 176], 95.00th=[ 180], 00:10:28.591 | 99.00th=[ 210], 99.50th=[ 241], 99.90th=[ 334], 99.95th=[ 334], 00:10:28.591 | 99.99th=[ 334] 00:10:28.591 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:10:28.591 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:28.591 lat (usec) : 250=95.51%, 500=0.37% 00:10:28.591 lat (msec) : 50=4.12% 00:10:28.591 cpu : usr=0.29%, sys=0.98%, ctx=537, majf=0, minf=1 00:10:28.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:28.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.591 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:28.591 00:10:28.591 Run status group 0 (all jobs): 00:10:28.591 READ: bw=86.0KiB/s (88.1kB/s), 86.0KiB/s-86.0KiB/s (88.1kB/s-88.1kB/s), io=88.0KiB (90.1kB), run=1023-1023msec 00:10:28.591 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:10:28.591 00:10:28.591 Disk stats (read/write): 00:10:28.591 nvme0n1: ios=45/512, merge=0/0, ticks=1731/80, in_queue=1811, util=98.50% 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
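For reference, the job that fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v generated above can be reproduced as a standalone run roughly like this (a sketch; the wrapper also discovers the /dev/nvme0n1 path itself):

cat > nmic-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
do_verify=1
verify=crc32c-intel
verify_dump=1
verify_backlog=512
verify_state_save=0
[job0]
filename=/dev/nvme0n1
EOF
fio nmic-write.fio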
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.591 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:28.591 rmmod nvme_tcp 00:10:28.591 rmmod nvme_fabrics 00:10:28.850 rmmod nvme_keyring 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 428210 ']' 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 428210 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 428210 ']' 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 428210 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 428210 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 428210' 00:10:28.850 killing process with pid 428210 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 428210 00:10:28.850 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
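The teardown traced here and in the following lines reduces to: drop the host's controllers, unload the initiator modules, stop the target, then undo the firewall rule and the namespace. A sketch with the command forms taken from the log (ip netns delete stands in for the remove_spdk_ns helper, and $nvmfpid for the pid captured at start-up):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # drops both paths (2 controllers)
modprobe -r nvme-tcp                                  # also removes nvme_fabrics / nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"                    # stop nvmf_tgt (pid 428210 here)
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the SPDK_NVMF ACCEPT rule
ip netns delete cvl_0_0_ns_spdk                       # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1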
common/autotest_common.sh@974 -- # wait 428210 00:10:29.108 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:29.108 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:29.108 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:29.108 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:29.108 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:10:29.108 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:29.108 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:10:29.108 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.109 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:29.109 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.109 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.109 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.012 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:31.012 00:10:31.012 real 0m14.995s 00:10:31.012 user 0m33.502s 00:10:31.012 sys 0m5.279s 00:10:31.012 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.012 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.012 ************************************ 00:10:31.012 END TEST nvmf_nmic 00:10:31.012 ************************************ 00:10:31.012 16:35:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:31.012 16:35:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:31.012 16:35:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.012 16:35:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:31.271 ************************************ 00:10:31.271 START TEST nvmf_fio_target 00:10:31.271 ************************************ 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:31.271 * Looking for test storage... 
00:10:31.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.271 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:31.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.272 --rc genhtml_branch_coverage=1 00:10:31.272 --rc genhtml_function_coverage=1 00:10:31.272 --rc genhtml_legend=1 00:10:31.272 --rc geninfo_all_blocks=1 00:10:31.272 --rc geninfo_unexecuted_blocks=1 00:10:31.272 00:10:31.272 ' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:31.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.272 --rc genhtml_branch_coverage=1 00:10:31.272 --rc genhtml_function_coverage=1 00:10:31.272 --rc genhtml_legend=1 00:10:31.272 --rc geninfo_all_blocks=1 00:10:31.272 --rc geninfo_unexecuted_blocks=1 00:10:31.272 00:10:31.272 ' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:31.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.272 --rc genhtml_branch_coverage=1 00:10:31.272 --rc genhtml_function_coverage=1 00:10:31.272 --rc genhtml_legend=1 00:10:31.272 --rc geninfo_all_blocks=1 00:10:31.272 --rc geninfo_unexecuted_blocks=1 00:10:31.272 00:10:31.272 ' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:31.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.272 --rc genhtml_branch_coverage=1 00:10:31.272 --rc genhtml_function_coverage=1 00:10:31.272 --rc genhtml_legend=1 00:10:31.272 --rc geninfo_all_blocks=1 00:10:31.272 --rc geninfo_unexecuted_blocks=1 00:10:31.272 00:10:31.272 ' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:31.272 16:35:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:31.272 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.273 16:35:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.836 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.836 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:37.836 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:37.836 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:37.836 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:37.836 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:37.836 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:37.836 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:37.836 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:37.836 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.837 16:35:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:37.837 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:37.837 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.837 16:35:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:37.837 Found net devices under 0000:86:00.0: cvl_0_0 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:37.837 Found net devices under 0000:86:00.1: cvl_0_1 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.837 16:35:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:37.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:10:37.837 00:10:37.837 --- 10.0.0.2 ping statistics --- 00:10:37.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.837 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:10:37.837 00:10:37.837 --- 10.0.0.1 ping statistics --- 00:10:37.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.837 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:37.837 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=433052 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 433052 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 433052 ']' 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.838 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.838 [2024-10-14 16:35:41.908237] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:10:37.838 [2024-10-14 16:35:41.908284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.838 [2024-10-14 16:35:41.978924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.838 [2024-10-14 16:35:42.022029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.838 [2024-10-14 16:35:42.022066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.838 [2024-10-14 16:35:42.022074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.838 [2024-10-14 16:35:42.022080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.838 [2024-10-14 16:35:42.022085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.838 [2024-10-14 16:35:42.023677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.838 [2024-10-14 16:35:42.023798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.838 [2024-10-14 16:35:42.023906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.838 [2024-10-14 16:35:42.023907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.838 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.838 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:37.838 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:37.838 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:37.838 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.838 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.838 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:37.838 [2024-10-14 16:35:42.325043] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.838 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.096 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:38.096 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.355 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:38.355 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.613 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:38.613 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.614 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:38.614 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:38.872 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.130 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:39.130 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.388 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:39.388 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.646 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:39.646 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:39.646 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.905 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:39.905 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.163 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:40.163 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:40.421 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.421 [2024-10-14 16:35:45.029542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.679 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:40.679 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:40.937 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.312 16:35:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:42.312 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:42.312 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.312 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:42.313 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:42.313 16:35:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:44.213 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:44.213 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:44.213 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.213 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:44.213 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.213 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:44.213 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:44.213 [global] 00:10:44.213 thread=1 00:10:44.213 invalidate=1 00:10:44.213 rw=write 00:10:44.213 time_based=1 00:10:44.213 runtime=1 00:10:44.213 ioengine=libaio 00:10:44.213 direct=1 00:10:44.213 bs=4096 00:10:44.213 iodepth=1 00:10:44.213 norandommap=0 00:10:44.213 numjobs=1 00:10:44.213 00:10:44.213 verify_dump=1 00:10:44.213 verify_backlog=512 00:10:44.213 verify_state_save=0 00:10:44.213 do_verify=1 00:10:44.213 verify=crc32c-intel 00:10:44.213 [job0] 00:10:44.213 filename=/dev/nvme0n1 00:10:44.213 [job1] 00:10:44.213 filename=/dev/nvme0n2 00:10:44.213 [job2] 00:10:44.213 filename=/dev/nvme0n3 00:10:44.213 [job3] 00:10:44.213 filename=/dev/nvme0n4 00:10:44.213 Could not set queue depth (nvme0n1) 00:10:44.213 Could not set queue depth (nvme0n2) 00:10:44.213 Could not set queue depth (nvme0n3) 00:10:44.213 Could not set queue depth (nvme0n4) 00:10:44.473 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.473 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.473 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.473 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.473 fio-3.35 00:10:44.473 Starting 4 threads 00:10:45.847 00:10:45.847 job0: (groupid=0, jobs=1): err= 0: pid=434406: Mon Oct 14 16:35:50 2024 00:10:45.847 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:10:45.847 slat (nsec): min=9674, max=29659, avg=22405.68, stdev=3280.12 00:10:45.847 clat (usec): min=40834, max=41150, avg=40964.87, stdev=84.48 00:10:45.847 lat (usec): min=40856, max=41171, avg=40987.27, stdev=84.68 00:10:45.847 clat percentiles (usec): 00:10:45.847 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 
20.00th=[40633], 00:10:45.847 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:45.847 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:45.847 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:45.847 | 99.99th=[41157] 00:10:45.847 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:45.847 slat (nsec): min=10796, max=43929, avg=12193.25, stdev=2192.29 00:10:45.847 clat (usec): min=142, max=375, avg=186.19, stdev=21.45 00:10:45.847 lat (usec): min=153, max=419, avg=198.38, stdev=22.10 00:10:45.847 clat percentiles (usec): 00:10:45.847 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:10:45.847 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:10:45.847 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 208], 00:10:45.847 | 99.00th=[ 258], 99.50th=[ 347], 99.90th=[ 375], 99.95th=[ 375], 00:10:45.847 | 99.99th=[ 375] 00:10:45.847 bw ( KiB/s): min= 4096, max= 4096, per=16.97%, avg=4096.00, stdev= 0.00, samples=1 00:10:45.847 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:45.847 lat (usec) : 250=94.57%, 500=1.31% 00:10:45.847 lat (msec) : 50=4.12% 00:10:45.847 cpu : usr=0.60%, sys=0.80%, ctx=535, majf=0, minf=1 00:10:45.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.847 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.847 job1: (groupid=0, jobs=1): err= 0: pid=434407: Mon Oct 14 16:35:50 2024 00:10:45.847 read: IOPS=2013, BW=8055KiB/s (8248kB/s)(8200KiB/1018msec) 00:10:45.847 slat (nsec): min=6967, max=39071, avg=8436.90, stdev=1585.72 00:10:45.847 clat (usec): min=170, max=41092, avg=256.36, stdev=1271.47 00:10:45.847 lat (usec): min=179, max=41115, avg=264.80, stdev=1271.73 00:10:45.847 clat percentiles (usec): 00:10:45.847 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 202], 00:10:45.847 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:10:45.847 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 233], 95.00th=[ 239], 00:10:45.847 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 3425], 99.95th=[40633], 00:10:45.847 | 99.99th=[41157] 00:10:45.847 write: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(10.0MiB/1018msec); 0 zone resets 00:10:45.847 slat (nsec): min=9107, max=38295, avg=11584.01, stdev=1741.99 00:10:45.847 clat (usec): min=117, max=2667, avg=168.29, stdev=71.56 00:10:45.847 lat (usec): min=129, max=2683, avg=179.88, stdev=71.70 00:10:45.847 clat percentiles (usec): 00:10:45.847 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 145], 00:10:45.847 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 165], 00:10:45.847 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 202], 95.00th=[ 241], 00:10:45.847 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 367], 99.95th=[ 2278], 00:10:45.847 | 99.99th=[ 2671] 00:10:45.847 bw ( KiB/s): min= 8936, max=11544, per=42.42%, avg=10240.00, stdev=1844.13, samples=2 00:10:45.847 iops : min= 2234, max= 2886, avg=2560.00, stdev=461.03, samples=2 00:10:45.847 lat (usec) : 250=98.07%, 500=1.78%, 750=0.02%, 1000=0.02% 00:10:45.847 lat (msec) : 4=0.07%, 50=0.04% 00:10:45.847 cpu : usr=3.74%, sys=7.08%, ctx=4610, majf=0, minf=1 00:10:45.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.847 issued rwts: total=2050,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.847 job2: (groupid=0, jobs=1): err= 0: pid=434408: Mon Oct 14 16:35:50 2024 00:10:45.847 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:10:45.847 slat (nsec): min=9951, max=23465, avg=21333.45, stdev=3126.85 00:10:45.847 clat (usec): min=40828, max=42101, avg=41023.37, stdev=252.06 00:10:45.847 lat (usec): min=40852, max=42123, avg=41044.70, stdev=251.90 00:10:45.847 clat percentiles (usec): 00:10:45.847 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:45.847 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:45.847 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:45.847 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:45.847 | 99.99th=[42206] 00:10:45.847 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:45.847 slat (nsec): min=9207, max=40216, avg=10415.76, stdev=1725.52 00:10:45.847 clat (usec): min=139, max=420, avg=190.68, stdev=24.11 00:10:45.847 lat (usec): min=149, max=460, avg=201.10, stdev=24.66 00:10:45.847 clat percentiles (usec): 00:10:45.847 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 163], 20.00th=[ 178], 00:10:45.847 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 194], 00:10:45.847 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 231], 00:10:45.847 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 420], 99.95th=[ 420], 00:10:45.847 | 99.99th=[ 420] 00:10:45.847 bw ( KiB/s): min= 4096, max= 4096, per=16.97%, avg=4096.00, stdev= 0.00, samples=1 00:10:45.847 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:45.847 lat (usec) : 250=94.57%, 500=1.31% 00:10:45.847 lat (msec) : 50=4.12% 00:10:45.847 cpu : usr=0.10%, sys=0.70%, ctx=534, majf=0, minf=2 00:10:45.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.847 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.847 job3: (groupid=0, jobs=1): err= 0: pid=434409: Mon Oct 14 16:35:50 2024 00:10:45.847 read: IOPS=2410, BW=9642KiB/s (9874kB/s)(9652KiB/1001msec) 00:10:45.847 slat (nsec): min=6627, max=31372, avg=7617.81, stdev=1123.62 00:10:45.847 clat (usec): min=172, max=335, avg=221.95, stdev=25.93 00:10:45.847 lat (usec): min=179, max=345, avg=229.57, stdev=26.06 00:10:45.847 clat percentiles (usec): 00:10:45.847 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:10:45.847 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 231], 00:10:45.847 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:10:45.847 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 285], 99.95th=[ 322], 00:10:45.847 | 99.99th=[ 334] 00:10:45.847 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:45.847 slat (nsec): min=9804, max=39977, avg=11186.81, stdev=1695.58 00:10:45.847 clat (usec): min=119, max=447, avg=158.78, stdev=20.96 00:10:45.847 lat (usec): min=130, 
max=458, avg=169.97, stdev=21.15 00:10:45.847 clat percentiles (usec): 00:10:45.847 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:10:45.847 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:10:45.847 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 196], 00:10:45.847 | 99.00th=[ 215], 99.50th=[ 231], 99.90th=[ 302], 99.95th=[ 359], 00:10:45.847 | 99.99th=[ 449] 00:10:45.847 bw ( KiB/s): min=12288, max=12288, per=50.90%, avg=12288.00, stdev= 0.00, samples=1 00:10:45.847 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:45.847 lat (usec) : 250=90.71%, 500=9.29% 00:10:45.847 cpu : usr=2.60%, sys=4.80%, ctx=4974, majf=0, minf=1 00:10:45.847 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.847 issued rwts: total=2413,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.847 00:10:45.847 Run status group 0 (all jobs): 00:10:45.847 READ: bw=17.3MiB/s (18.1MB/s), 87.4KiB/s-9642KiB/s (89.5kB/s-9874kB/s), io=17.6MiB (18.5MB), run=1001-1018msec 00:10:45.847 WRITE: bw=23.6MiB/s (24.7MB/s), 2034KiB/s-9.99MiB/s (2083kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1018msec 00:10:45.847 00:10:45.847 Disk stats (read/write): 00:10:45.847 nvme0n1: ios=44/512, merge=0/0, ticks=1726/92, in_queue=1818, util=98.10% 00:10:45.847 nvme0n2: ios=2068/2130, merge=0/0, ticks=423/333, in_queue=756, util=87.09% 00:10:45.847 nvme0n3: ios=41/512, merge=0/0, ticks=838/101, in_queue=939, util=91.14% 00:10:45.847 nvme0n4: ios=2078/2239, merge=0/0, ticks=1422/339, in_queue=1761, util=98.53% 00:10:45.848 16:35:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:45.848 [global] 00:10:45.848 thread=1 00:10:45.848 invalidate=1 00:10:45.848 rw=randwrite 00:10:45.848 time_based=1 00:10:45.848 runtime=1 00:10:45.848 ioengine=libaio 00:10:45.848 direct=1 00:10:45.848 bs=4096 00:10:45.848 iodepth=1 00:10:45.848 norandommap=0 00:10:45.848 numjobs=1 00:10:45.848 00:10:45.848 verify_dump=1 00:10:45.848 verify_backlog=512 00:10:45.848 verify_state_save=0 00:10:45.848 do_verify=1 00:10:45.848 verify=crc32c-intel 00:10:45.848 [job0] 00:10:45.848 filename=/dev/nvme0n1 00:10:45.848 [job1] 00:10:45.848 filename=/dev/nvme0n2 00:10:45.848 [job2] 00:10:45.848 filename=/dev/nvme0n3 00:10:45.848 [job3] 00:10:45.848 filename=/dev/nvme0n4 00:10:45.848 Could not set queue depth (nvme0n1) 00:10:45.848 Could not set queue depth (nvme0n2) 00:10:45.848 Could not set queue depth (nvme0n3) 00:10:45.848 Could not set queue depth (nvme0n4) 00:10:46.106 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.106 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.106 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.106 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.106 fio-3.35 00:10:46.106 Starting 4 threads 00:10:47.505 00:10:47.505 job0: (groupid=0, jobs=1): err= 0: pid=434781: Mon Oct 14 16:35:51 2024 00:10:47.505 read: IOPS=51, 
BW=206KiB/s (211kB/s)(212KiB/1027msec) 00:10:47.505 slat (nsec): min=6662, max=22566, avg=9125.64, stdev=3267.91 00:10:47.505 clat (usec): min=213, max=42363, avg=17258.37, stdev=20350.20 00:10:47.505 lat (usec): min=223, max=42370, avg=17267.49, stdev=20351.50 00:10:47.505 clat percentiles (usec): 00:10:47.505 | 1.00th=[ 215], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 262], 00:10:47.505 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 334], 60.00th=[40633], 00:10:47.505 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:47.505 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:47.505 | 99.99th=[42206] 00:10:47.505 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:47.505 slat (nsec): min=9028, max=69965, avg=11157.53, stdev=3674.25 00:10:47.505 clat (usec): min=124, max=367, avg=203.43, stdev=46.06 00:10:47.505 lat (usec): min=134, max=377, avg=214.59, stdev=45.98 00:10:47.505 clat percentiles (usec): 00:10:47.505 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 161], 20.00th=[ 174], 00:10:47.505 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 200], 00:10:47.505 | 70.00th=[ 206], 80.00th=[ 221], 90.00th=[ 269], 95.00th=[ 326], 00:10:47.505 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 367], 99.95th=[ 367], 00:10:47.505 | 99.99th=[ 367] 00:10:47.505 bw ( KiB/s): min= 4096, max= 4096, per=17.15%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.505 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.505 lat (usec) : 250=80.53%, 500=15.58% 00:10:47.505 lat (msec) : 50=3.89% 00:10:47.505 cpu : usr=0.29%, sys=0.49%, ctx=566, majf=0, minf=1 00:10:47.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.505 issued rwts: total=53,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.505 job1: (groupid=0, jobs=1): err= 0: pid=434782: Mon Oct 14 16:35:51 2024 00:10:47.505 read: IOPS=2169, BW=8679KiB/s (8888kB/s)(8688KiB/1001msec) 00:10:47.505 slat (nsec): min=6524, max=28323, avg=7488.36, stdev=1150.60 00:10:47.505 clat (usec): min=186, max=41424, avg=256.72, stdev=884.17 00:10:47.505 lat (usec): min=193, max=41431, avg=264.21, stdev=884.17 00:10:47.505 clat percentiles (usec): 00:10:47.505 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 223], 00:10:47.505 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:10:47.505 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:10:47.505 | 99.00th=[ 277], 99.50th=[ 383], 99.90th=[ 717], 99.95th=[ 725], 00:10:47.505 | 99.99th=[41681] 00:10:47.505 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:47.505 slat (nsec): min=9393, max=38686, avg=10496.11, stdev=1128.79 00:10:47.505 clat (usec): min=114, max=342, avg=152.19, stdev=15.21 00:10:47.505 lat (usec): min=123, max=381, avg=162.68, stdev=15.39 00:10:47.505 clat percentiles (usec): 00:10:47.505 | 1.00th=[ 124], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 141], 00:10:47.505 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:10:47.505 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 178], 00:10:47.505 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 253], 99.95th=[ 334], 00:10:47.505 | 99.99th=[ 343] 00:10:47.505 bw ( KiB/s): min=11904, max=11904, per=49.84%, 
avg=11904.00, stdev= 0.00, samples=1 00:10:47.505 iops : min= 2976, max= 2976, avg=2976.00, stdev= 0.00, samples=1 00:10:47.505 lat (usec) : 250=89.43%, 500=10.46%, 750=0.08% 00:10:47.505 lat (msec) : 50=0.02% 00:10:47.505 cpu : usr=2.00%, sys=4.70%, ctx=4733, majf=0, minf=1 00:10:47.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.505 issued rwts: total=2172,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.505 job2: (groupid=0, jobs=1): err= 0: pid=434783: Mon Oct 14 16:35:51 2024 00:10:47.505 read: IOPS=277, BW=1112KiB/s (1138kB/s)(1144KiB/1029msec) 00:10:47.505 slat (nsec): min=6988, max=24436, avg=8047.27, stdev=2127.04 00:10:47.505 clat (usec): min=175, max=43113, avg=3231.69, stdev=10717.92 00:10:47.505 lat (usec): min=182, max=43123, avg=3239.73, stdev=10719.32 00:10:47.505 clat percentiles (usec): 00:10:47.505 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:10:47.505 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 221], 00:10:47.505 | 70.00th=[ 229], 80.00th=[ 243], 90.00th=[ 285], 95.00th=[41157], 00:10:47.505 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:10:47.505 | 99.99th=[43254] 00:10:47.505 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:47.505 slat (nsec): min=9993, max=44281, avg=11566.54, stdev=2548.15 00:10:47.505 clat (usec): min=142, max=292, avg=183.09, stdev=20.89 00:10:47.505 lat (usec): min=153, max=325, avg=194.66, stdev=21.55 00:10:47.505 clat percentiles (usec): 00:10:47.505 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:47.505 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 192], 00:10:47.505 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 215], 00:10:47.505 | 99.00th=[ 231], 99.50th=[ 277], 99.90th=[ 293], 99.95th=[ 293], 00:10:47.505 | 99.99th=[ 293] 00:10:47.505 bw ( KiB/s): min= 4096, max= 4096, per=17.15%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.505 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.505 lat (usec) : 250=93.23%, 500=4.14% 00:10:47.505 lat (msec) : 50=2.63% 00:10:47.505 cpu : usr=0.29%, sys=0.88%, ctx=799, majf=0, minf=1 00:10:47.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.505 issued rwts: total=286,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.505 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.505 job3: (groupid=0, jobs=1): err= 0: pid=434784: Mon Oct 14 16:35:51 2024 00:10:47.505 read: IOPS=2224, BW=8899KiB/s (9113kB/s)(8908KiB/1001msec) 00:10:47.505 slat (nsec): min=7402, max=37037, avg=8443.42, stdev=1155.09 00:10:47.505 clat (usec): min=173, max=2641, avg=231.79, stdev=60.52 00:10:47.505 lat (usec): min=181, max=2650, avg=240.23, stdev=60.56 00:10:47.505 clat percentiles (usec): 00:10:47.505 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 00:10:47.505 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:10:47.505 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 262], 00:10:47.505 | 99.00th=[ 297], 99.50th=[ 478], 99.90th=[ 
570], 99.95th=[ 906], 00:10:47.505 | 99.99th=[ 2638] 00:10:47.505 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:47.505 slat (nsec): min=10721, max=43627, avg=11827.83, stdev=1919.41 00:10:47.505 clat (usec): min=124, max=526, avg=164.30, stdev=23.35 00:10:47.505 lat (usec): min=136, max=539, avg=176.13, stdev=23.58 00:10:47.505 clat percentiles (usec): 00:10:47.505 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:47.505 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:10:47.505 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 192], 95.00th=[ 204], 00:10:47.505 | 99.00th=[ 239], 99.50th=[ 262], 99.90th=[ 334], 99.95th=[ 523], 00:10:47.505 | 99.99th=[ 529] 00:10:47.505 bw ( KiB/s): min=10304, max=10304, per=43.14%, avg=10304.00, stdev= 0.00, samples=1 00:10:47.505 iops : min= 2576, max= 2576, avg=2576.00, stdev= 0.00, samples=1 00:10:47.505 lat (usec) : 250=93.19%, 500=6.62%, 750=0.15%, 1000=0.02% 00:10:47.505 lat (msec) : 4=0.02% 00:10:47.505 cpu : usr=3.50%, sys=8.30%, ctx=4788, majf=0, minf=1 00:10:47.505 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.506 issued rwts: total=2227,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.506 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.506 00:10:47.506 Run status group 0 (all jobs): 00:10:47.506 READ: bw=18.0MiB/s (18.9MB/s), 206KiB/s-8899KiB/s (211kB/s-9113kB/s), io=18.5MiB (19.4MB), run=1001-1029msec 00:10:47.506 WRITE: bw=23.3MiB/s (24.5MB/s), 1990KiB/s-9.99MiB/s (2038kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1029msec 00:10:47.506 00:10:47.506 Disk stats (read/write): 00:10:47.506 nvme0n1: ios=91/512, merge=0/0, ticks=728/102, in_queue=830, util=86.97% 00:10:47.506 nvme0n2: ios=2055/2048, merge=0/0, ticks=1227/311, in_queue=1538, util=96.35% 00:10:47.506 nvme0n3: ios=326/512, merge=0/0, ticks=1720/89, in_queue=1809, util=98.44% 00:10:47.506 nvme0n4: ios=2023/2048, merge=0/0, ticks=1327/307, in_queue=1634, util=97.28% 00:10:47.506 16:35:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:47.506 [global] 00:10:47.506 thread=1 00:10:47.506 invalidate=1 00:10:47.506 rw=write 00:10:47.506 time_based=1 00:10:47.506 runtime=1 00:10:47.506 ioengine=libaio 00:10:47.506 direct=1 00:10:47.506 bs=4096 00:10:47.506 iodepth=128 00:10:47.506 norandommap=0 00:10:47.506 numjobs=1 00:10:47.506 00:10:47.506 verify_dump=1 00:10:47.506 verify_backlog=512 00:10:47.506 verify_state_save=0 00:10:47.506 do_verify=1 00:10:47.506 verify=crc32c-intel 00:10:47.506 [job0] 00:10:47.506 filename=/dev/nvme0n1 00:10:47.506 [job1] 00:10:47.506 filename=/dev/nvme0n2 00:10:47.506 [job2] 00:10:47.506 filename=/dev/nvme0n3 00:10:47.506 [job3] 00:10:47.506 filename=/dev/nvme0n4 00:10:47.506 Could not set queue depth (nvme0n1) 00:10:47.506 Could not set queue depth (nvme0n2) 00:10:47.506 Could not set queue depth (nvme0n3) 00:10:47.506 Could not set queue depth (nvme0n4) 00:10:47.766 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.766 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.766 job2: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.766 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.766 fio-3.35 00:10:47.766 Starting 4 threads 00:10:49.136 00:10:49.136 job0: (groupid=0, jobs=1): err= 0: pid=435183: Mon Oct 14 16:35:53 2024 00:10:49.136 read: IOPS=3042, BW=11.9MiB/s (12.5MB/s)(12.1MiB/1017msec) 00:10:49.136 slat (nsec): min=1399, max=15805k, avg=129841.72, stdev=962405.60 00:10:49.136 clat (usec): min=3760, max=50655, avg=15644.77, stdev=8035.08 00:10:49.136 lat (usec): min=3771, max=50662, avg=15774.61, stdev=8111.51 00:10:49.136 clat percentiles (usec): 00:10:49.136 | 1.00th=[ 4948], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:10:49.136 | 30.00th=[10552], 40.00th=[11863], 50.00th=[12256], 60.00th=[14091], 00:10:49.136 | 70.00th=[17433], 80.00th=[21890], 90.00th=[26346], 95.00th=[31589], 00:10:49.136 | 99.00th=[44827], 99.50th=[47449], 99.90th=[50594], 99.95th=[50594], 00:10:49.136 | 99.99th=[50594] 00:10:49.136 write: IOPS=3524, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1017msec); 0 zone resets 00:10:49.136 slat (usec): min=2, max=17814, avg=160.34, stdev=940.81 00:10:49.136 clat (usec): min=1624, max=107493, avg=22410.25, stdev=17491.07 00:10:49.136 lat (usec): min=1637, max=110971, avg=22570.59, stdev=17590.44 00:10:49.136 clat percentiles (msec): 00:10:49.136 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 9], 00:10:49.136 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 22], 00:10:49.136 | 70.00th=[ 23], 80.00th=[ 24], 90.00th=[ 44], 95.00th=[ 61], 00:10:49.136 | 99.00th=[ 103], 99.50th=[ 105], 99.90th=[ 108], 99.95th=[ 108], 00:10:49.136 | 99.99th=[ 108] 00:10:49.136 bw ( KiB/s): min=11448, max=16384, per=20.17%, avg=13916.00, stdev=3490.28, samples=2 00:10:49.136 iops : min= 2862, max= 4096, avg=3479.00, stdev=872.57, samples=2 00:10:49.136 lat (msec) : 2=0.03%, 4=1.08%, 10=20.96%, 20=42.66%, 50=31.48% 00:10:49.136 lat (msec) : 100=3.20%, 250=0.58% 00:10:49.136 cpu : usr=2.66%, sys=4.92%, ctx=370, majf=0, minf=1 00:10:49.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:49.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.136 issued rwts: total=3094,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.136 job1: (groupid=0, jobs=1): err= 0: pid=435195: Mon Oct 14 16:35:53 2024 00:10:49.136 read: IOPS=4560, BW=17.8MiB/s (18.7MB/s)(18.1MiB/1017msec) 00:10:49.136 slat (nsec): min=1285, max=14732k, avg=101926.72, stdev=773546.13 00:10:49.136 clat (usec): min=4176, max=38755, avg=12643.36, stdev=4551.63 00:10:49.136 lat (usec): min=4184, max=38770, avg=12745.28, stdev=4613.19 00:10:49.136 clat percentiles (usec): 00:10:49.136 | 1.00th=[ 5080], 5.00th=[ 7635], 10.00th=[ 8356], 20.00th=[ 9634], 00:10:49.136 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10683], 60.00th=[12649], 00:10:49.136 | 70.00th=[13960], 80.00th=[16057], 90.00th=[18744], 95.00th=[22152], 00:10:49.136 | 99.00th=[27395], 99.50th=[27657], 99.90th=[30278], 99.95th=[32375], 00:10:49.136 | 99.99th=[38536] 00:10:49.136 write: IOPS=5034, BW=19.7MiB/s (20.6MB/s)(20.0MiB/1017msec); 0 zone resets 00:10:49.136 slat (usec): min=2, max=12547, avg=97.21, stdev=585.81 00:10:49.136 clat (usec): min=1608, max=103633, avg=13728.62, stdev=14357.39 00:10:49.136 lat (usec): min=1620, 
max=103644, avg=13825.82, stdev=14455.48 00:10:49.136 clat percentiles (msec): 00:10:49.136 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:10:49.136 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:10:49.136 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 17], 95.00th=[ 32], 00:10:49.136 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 104], 99.95th=[ 104], 00:10:49.136 | 99.99th=[ 104] 00:10:49.136 bw ( KiB/s): min=19704, max=20480, per=29.12%, avg=20092.00, stdev=548.71, samples=2 00:10:49.136 iops : min= 4926, max= 5120, avg=5023.00, stdev=137.18, samples=2 00:10:49.136 lat (msec) : 2=0.02%, 4=0.77%, 10=34.11%, 20=57.62%, 50=5.54% 00:10:49.136 lat (msec) : 100=1.87%, 250=0.07% 00:10:49.136 cpu : usr=4.72%, sys=5.02%, ctx=579, majf=0, minf=1 00:10:49.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:49.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.136 issued rwts: total=4638,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.136 job2: (groupid=0, jobs=1): err= 0: pid=435211: Mon Oct 14 16:35:53 2024 00:10:49.136 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:10:49.136 slat (nsec): min=1052, max=35572k, avg=163973.17, stdev=1333568.86 00:10:49.136 clat (usec): min=1756, max=69526, avg=19079.25, stdev=11896.24 00:10:49.136 lat (usec): min=1763, max=72818, avg=19243.22, stdev=12009.57 00:10:49.136 clat percentiles (usec): 00:10:49.136 | 1.00th=[ 2540], 5.00th=[ 7373], 10.00th=[ 9110], 20.00th=[10683], 00:10:49.136 | 30.00th=[12649], 40.00th=[12911], 50.00th=[14222], 60.00th=[17957], 00:10:49.136 | 70.00th=[21890], 80.00th=[24511], 90.00th=[35914], 95.00th=[44827], 00:10:49.136 | 99.00th=[61080], 99.50th=[61080], 99.90th=[69731], 99.95th=[69731], 00:10:49.136 | 99.99th=[69731] 00:10:49.136 write: IOPS=3451, BW=13.5MiB/s (14.1MB/s)(13.7MiB/1017msec); 0 zone resets 00:10:49.136 slat (nsec): min=1871, max=18768k, avg=129743.32, stdev=835776.66 00:10:49.136 clat (usec): min=1154, max=69495, avg=20049.49, stdev=9837.90 00:10:49.136 lat (usec): min=1164, max=69498, avg=20179.23, stdev=9887.52 00:10:49.136 clat percentiles (usec): 00:10:49.136 | 1.00th=[ 4047], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[11731], 00:10:49.136 | 30.00th=[14615], 40.00th=[16909], 50.00th=[19006], 60.00th=[21365], 00:10:49.136 | 70.00th=[22414], 80.00th=[23462], 90.00th=[31589], 95.00th=[39060], 00:10:49.136 | 99.00th=[59507], 99.50th=[59507], 99.90th=[60031], 99.95th=[69731], 00:10:49.136 | 99.99th=[69731] 00:10:49.136 bw ( KiB/s): min=12288, max=14776, per=19.62%, avg=13532.00, stdev=1759.28, samples=2 00:10:49.136 iops : min= 3072, max= 3694, avg=3383.00, stdev=439.82, samples=2 00:10:49.136 lat (msec) : 2=0.15%, 4=0.73%, 10=10.00%, 20=47.43%, 50=38.79% 00:10:49.136 lat (msec) : 100=2.90% 00:10:49.136 cpu : usr=1.97%, sys=3.84%, ctx=315, majf=0, minf=2 00:10:49.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:49.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.136 issued rwts: total=3072,3510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.136 job3: (groupid=0, jobs=1): err= 0: pid=435217: Mon Oct 14 16:35:53 2024 00:10:49.136 read: IOPS=5034, 
BW=19.7MiB/s (20.6MB/s)(20.0MiB/1017msec) 00:10:49.136 slat (nsec): min=1107, max=14453k, avg=101990.23, stdev=784585.70 00:10:49.136 clat (usec): min=4099, max=37095, avg=13179.06, stdev=3989.72 00:10:49.136 lat (usec): min=4107, max=37109, avg=13281.05, stdev=4053.71 00:10:49.136 clat percentiles (usec): 00:10:49.136 | 1.00th=[ 5014], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10683], 00:10:49.136 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:10:49.136 | 70.00th=[13042], 80.00th=[15926], 90.00th=[19530], 95.00th=[21627], 00:10:49.136 | 99.00th=[25560], 99.50th=[25560], 99.90th=[26346], 99.95th=[30802], 00:10:49.136 | 99.99th=[36963] 00:10:49.136 write: IOPS=5236, BW=20.5MiB/s (21.5MB/s)(20.8MiB/1017msec); 0 zone resets 00:10:49.136 slat (usec): min=2, max=20008, avg=80.73, stdev=637.87 00:10:49.136 clat (usec): min=1785, max=36173, avg=11503.60, stdev=3701.44 00:10:49.136 lat (usec): min=1794, max=36193, avg=11584.33, stdev=3761.73 00:10:49.137 clat percentiles (usec): 00:10:49.137 | 1.00th=[ 2900], 5.00th=[ 5997], 10.00th=[ 7767], 20.00th=[ 8848], 00:10:49.137 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11338], 60.00th=[12125], 00:10:49.137 | 70.00th=[12387], 80.00th=[12780], 90.00th=[16188], 95.00th=[16712], 00:10:49.137 | 99.00th=[25297], 99.50th=[26084], 99.90th=[26346], 99.95th=[30802], 00:10:49.137 | 99.99th=[35914] 00:10:49.137 bw ( KiB/s): min=20464, max=21128, per=30.14%, avg=20796.00, stdev=469.52, samples=2 00:10:49.137 iops : min= 5116, max= 5282, avg=5199.00, stdev=117.38, samples=2 00:10:49.137 lat (msec) : 2=0.08%, 4=1.07%, 10=17.30%, 20=75.16%, 50=6.39% 00:10:49.137 cpu : usr=3.15%, sys=5.91%, ctx=456, majf=0, minf=1 00:10:49.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:49.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.137 issued rwts: total=5120,5326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.137 00:10:49.137 Run status group 0 (all jobs): 00:10:49.137 READ: bw=61.2MiB/s (64.1MB/s), 11.8MiB/s-19.7MiB/s (12.4MB/s-20.6MB/s), io=62.2MiB (65.2MB), run=1017-1017msec 00:10:49.137 WRITE: bw=67.4MiB/s (70.6MB/s), 13.5MiB/s-20.5MiB/s (14.1MB/s-21.5MB/s), io=68.5MiB (71.8MB), run=1017-1017msec 00:10:49.137 00:10:49.137 Disk stats (read/write): 00:10:49.137 nvme0n1: ios=2613/2967, merge=0/0, ticks=41050/64083, in_queue=105133, util=98.20% 00:10:49.137 nvme0n2: ios=4613/4759, merge=0/0, ticks=55930/48733, in_queue=104663, util=86.92% 00:10:49.137 nvme0n3: ios=2560/2855, merge=0/0, ticks=43635/55153, in_queue=98788, util=88.88% 00:10:49.137 nvme0n4: ios=4133/4377, merge=0/0, ticks=55293/49714, in_queue=105007, util=96.23% 00:10:49.137 16:35:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:49.137 [global] 00:10:49.137 thread=1 00:10:49.137 invalidate=1 00:10:49.137 rw=randwrite 00:10:49.137 time_based=1 00:10:49.137 runtime=1 00:10:49.137 ioengine=libaio 00:10:49.137 direct=1 00:10:49.137 bs=4096 00:10:49.137 iodepth=128 00:10:49.137 norandommap=0 00:10:49.137 numjobs=1 00:10:49.137 00:10:49.137 verify_dump=1 00:10:49.137 verify_backlog=512 00:10:49.137 verify_state_save=0 00:10:49.137 do_verify=1 00:10:49.137 verify=crc32c-intel 00:10:49.137 [job0] 00:10:49.137 filename=/dev/nvme0n1 
00:10:49.137 [job1] 00:10:49.137 filename=/dev/nvme0n2 00:10:49.137 [job2] 00:10:49.137 filename=/dev/nvme0n3 00:10:49.137 [job3] 00:10:49.137 filename=/dev/nvme0n4 00:10:49.137 Could not set queue depth (nvme0n1) 00:10:49.137 Could not set queue depth (nvme0n2) 00:10:49.137 Could not set queue depth (nvme0n3) 00:10:49.137 Could not set queue depth (nvme0n4) 00:10:49.137 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.137 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.137 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.137 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.137 fio-3.35 00:10:49.137 Starting 4 threads 00:10:50.509 00:10:50.509 job0: (groupid=0, jobs=1): err= 0: pid=435646: Mon Oct 14 16:35:54 2024 00:10:50.509 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:10:50.509 slat (nsec): min=1420, max=16228k, avg=137677.27, stdev=990839.03 00:10:50.509 clat (usec): min=3953, max=48675, avg=16258.36, stdev=6210.40 00:10:50.509 lat (usec): min=3961, max=48682, avg=16396.04, stdev=6303.59 00:10:50.509 clat percentiles (usec): 00:10:50.509 | 1.00th=[ 7046], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10421], 00:10:50.509 | 30.00th=[12125], 40.00th=[13566], 50.00th=[14615], 60.00th=[17957], 00:10:50.509 | 70.00th=[19792], 80.00th=[21365], 90.00th=[23987], 95.00th=[26608], 00:10:50.509 | 99.00th=[33424], 99.50th=[36963], 99.90th=[48497], 99.95th=[48497], 00:10:50.509 | 99.99th=[48497] 00:10:50.509 write: IOPS=3939, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1007msec); 0 zone resets 00:10:50.509 slat (usec): min=2, max=10311, avg=119.38, stdev=553.25 00:10:50.509 clat (usec): min=2110, max=82501, avg=17342.19, stdev=11759.10 00:10:50.509 lat (usec): min=2123, max=82511, avg=17461.57, stdev=11829.78 00:10:50.509 clat percentiles (usec): 00:10:50.509 | 1.00th=[ 3654], 5.00th=[ 6521], 10.00th=[ 7898], 20.00th=[ 9896], 00:10:50.509 | 30.00th=[10421], 40.00th=[10683], 50.00th=[13435], 60.00th=[19792], 00:10:50.509 | 70.00th=[20317], 80.00th=[21103], 90.00th=[28443], 95.00th=[36963], 00:10:50.509 | 99.00th=[66847], 99.50th=[74974], 99.90th=[82314], 99.95th=[82314], 00:10:50.509 | 99.99th=[82314] 00:10:50.509 bw ( KiB/s): min=12872, max=17840, per=21.96%, avg=15356.00, stdev=3512.91, samples=2 00:10:50.509 iops : min= 3218, max= 4460, avg=3839.00, stdev=878.23, samples=2 00:10:50.509 lat (msec) : 4=0.94%, 10=16.26%, 20=49.49%, 50=31.62%, 100=1.68% 00:10:50.509 cpu : usr=3.38%, sys=4.87%, ctx=483, majf=0, minf=1 00:10:50.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:50.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.509 issued rwts: total=3584,3967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.509 job1: (groupid=0, jobs=1): err= 0: pid=435657: Mon Oct 14 16:35:54 2024 00:10:50.509 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:10:50.509 slat (nsec): min=1126, max=27074k, avg=137577.76, stdev=911798.43 00:10:50.509 clat (usec): min=7555, max=55192, avg=17557.53, stdev=7650.94 00:10:50.509 lat (usec): min=7561, max=55218, avg=17695.11, stdev=7711.04 00:10:50.509 clat percentiles 
(usec): 00:10:50.509 | 1.00th=[ 7701], 5.00th=[10159], 10.00th=[10814], 20.00th=[11863], 00:10:50.509 | 30.00th=[13829], 40.00th=[16319], 50.00th=[16909], 60.00th=[17433], 00:10:50.509 | 70.00th=[18744], 80.00th=[20055], 90.00th=[21627], 95.00th=[23987], 00:10:50.509 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[53216], 00:10:50.509 | 99.99th=[55313] 00:10:50.509 write: IOPS=3486, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1007msec); 0 zone resets 00:10:50.509 slat (nsec): min=1840, max=17221k, avg=151891.48, stdev=902126.07 00:10:50.509 clat (usec): min=1128, max=63277, avg=20891.29, stdev=11247.45 00:10:50.509 lat (usec): min=1138, max=63286, avg=21043.18, stdev=11326.76 00:10:50.509 clat percentiles (usec): 00:10:50.509 | 1.00th=[ 5538], 5.00th=[ 7373], 10.00th=[ 8717], 20.00th=[ 9896], 00:10:50.509 | 30.00th=[14877], 40.00th=[18744], 50.00th=[20317], 60.00th=[20579], 00:10:50.509 | 70.00th=[22676], 80.00th=[26346], 90.00th=[35390], 95.00th=[41681], 00:10:50.509 | 99.00th=[63177], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:10:50.509 | 99.99th=[63177] 00:10:50.509 bw ( KiB/s): min=11648, max=15416, per=19.35%, avg=13532.00, stdev=2664.38, samples=2 00:10:50.509 iops : min= 2912, max= 3854, avg=3383.00, stdev=666.09, samples=2 00:10:50.509 lat (msec) : 2=0.12%, 4=0.02%, 10=13.17%, 20=47.39%, 50=36.58% 00:10:50.509 lat (msec) : 100=2.72% 00:10:50.509 cpu : usr=2.49%, sys=3.58%, ctx=386, majf=0, minf=1 00:10:50.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:50.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.509 issued rwts: total=3072,3511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.509 job2: (groupid=0, jobs=1): err= 0: pid=435678: Mon Oct 14 16:35:54 2024 00:10:50.509 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:10:50.509 slat (nsec): min=1392, max=11393k, avg=99556.54, stdev=643110.03 00:10:50.509 clat (usec): min=5231, max=32633, avg=12415.53, stdev=2950.93 00:10:50.509 lat (usec): min=5241, max=32638, avg=12515.08, stdev=3005.78 00:10:50.509 clat percentiles (usec): 00:10:50.509 | 1.00th=[ 6521], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[11076], 00:10:50.509 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:10:50.509 | 70.00th=[12518], 80.00th=[13304], 90.00th=[15533], 95.00th=[18220], 00:10:50.509 | 99.00th=[23987], 99.50th=[27919], 99.90th=[32637], 99.95th=[32637], 00:10:50.509 | 99.99th=[32637] 00:10:50.509 write: IOPS=4942, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1004msec); 0 zone resets 00:10:50.509 slat (usec): min=2, max=15456, avg=100.58, stdev=533.83 00:10:50.509 clat (usec): min=2785, max=41190, avg=14110.48, stdev=6218.19 00:10:50.509 lat (usec): min=2792, max=41202, avg=14211.06, stdev=6250.09 00:10:50.509 clat percentiles (usec): 00:10:50.509 | 1.00th=[ 4686], 5.00th=[ 7701], 10.00th=[ 9503], 20.00th=[10945], 00:10:50.509 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:10:50.509 | 70.00th=[13173], 80.00th=[16450], 90.00th=[24511], 95.00th=[28967], 00:10:50.509 | 99.00th=[33162], 99.50th=[34341], 99.90th=[41157], 99.95th=[41157], 00:10:50.509 | 99.99th=[41157] 00:10:50.509 bw ( KiB/s): min=18032, max=20648, per=27.66%, avg=19340.00, stdev=1849.79, samples=2 00:10:50.509 iops : min= 4508, max= 5162, avg=4835.00, stdev=462.45, samples=2 00:10:50.509 lat (msec) : 4=0.19%, 
10=11.42%, 20=78.99%, 50=9.40% 00:10:50.509 cpu : usr=3.79%, sys=6.08%, ctx=533, majf=0, minf=2 00:10:50.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:50.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.509 issued rwts: total=4608,4962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.509 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.509 job3: (groupid=0, jobs=1): err= 0: pid=435683: Mon Oct 14 16:35:54 2024 00:10:50.509 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:10:50.509 slat (nsec): min=1104, max=6033.7k, avg=94771.00, stdev=536435.99 00:10:50.509 clat (usec): min=4809, max=20165, avg=11941.89, stdev=1971.84 00:10:50.509 lat (usec): min=4816, max=21594, avg=12036.66, stdev=2010.82 00:10:50.509 clat percentiles (usec): 00:10:50.509 | 1.00th=[ 6259], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[11207], 00:10:50.509 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:10:50.509 | 70.00th=[12387], 80.00th=[12911], 90.00th=[14222], 95.00th=[15270], 00:10:50.509 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19792], 99.95th=[20055], 00:10:50.509 | 99.99th=[20055] 00:10:50.509 write: IOPS=5141, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1004msec); 0 zone resets 00:10:50.509 slat (nsec): min=1857, max=21818k, avg=94516.07, stdev=574666.79 00:10:50.509 clat (usec): min=319, max=41552, avg=12683.76, stdev=3314.63 00:10:50.509 lat (usec): min=3190, max=41556, avg=12778.28, stdev=3333.70 00:10:50.509 clat percentiles (usec): 00:10:50.509 | 1.00th=[ 5604], 5.00th=[ 9110], 10.00th=[11076], 20.00th=[11469], 00:10:50.509 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11863], 60.00th=[11994], 00:10:50.509 | 70.00th=[12256], 80.00th=[13042], 90.00th=[16319], 95.00th=[19530], 00:10:50.509 | 99.00th=[23462], 99.50th=[28181], 99.90th=[38011], 99.95th=[38011], 00:10:50.509 | 99.99th=[41681] 00:10:50.509 bw ( KiB/s): min=20480, max=20480, per=29.29%, avg=20480.00, stdev= 0.00, samples=2 00:10:50.509 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:50.509 lat (usec) : 500=0.01% 00:10:50.509 lat (msec) : 4=0.07%, 10=9.34%, 20=88.25%, 50=2.33% 00:10:50.509 cpu : usr=1.99%, sys=5.68%, ctx=551, majf=0, minf=1 00:10:50.509 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:50.509 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.509 issued rwts: total=5120,5162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.510 00:10:50.510 Run status group 0 (all jobs): 00:10:50.510 READ: bw=63.6MiB/s (66.6MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1004-1007msec 00:10:50.510 WRITE: bw=68.3MiB/s (71.6MB/s), 13.6MiB/s-20.1MiB/s (14.3MB/s-21.1MB/s), io=68.8MiB (72.1MB), run=1004-1007msec 00:10:50.510 00:10:50.510 Disk stats (read/write): 00:10:50.510 nvme0n1: ios=3108/3535, merge=0/0, ticks=37020/48901, in_queue=85921, util=96.09% 00:10:50.510 nvme0n2: ios=2588/3024, merge=0/0, ticks=22745/32582, in_queue=55327, util=95.93% 00:10:50.510 nvme0n3: ios=3882/4096, merge=0/0, ticks=34410/44101, in_queue=78511, util=88.87% 00:10:50.510 nvme0n4: ios=4137/4575, merge=0/0, ticks=17536/22884, in_queue=40420, util=96.33% 00:10:50.510 16:35:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@55 -- # sync 00:10:50.510 16:35:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=435781 00:10:50.510 16:35:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:50.510 16:35:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:50.510 [global] 00:10:50.510 thread=1 00:10:50.510 invalidate=1 00:10:50.510 rw=read 00:10:50.510 time_based=1 00:10:50.510 runtime=10 00:10:50.510 ioengine=libaio 00:10:50.510 direct=1 00:10:50.510 bs=4096 00:10:50.510 iodepth=1 00:10:50.510 norandommap=1 00:10:50.510 numjobs=1 00:10:50.510 00:10:50.510 [job0] 00:10:50.510 filename=/dev/nvme0n1 00:10:50.510 [job1] 00:10:50.510 filename=/dev/nvme0n2 00:10:50.510 [job2] 00:10:50.510 filename=/dev/nvme0n3 00:10:50.510 [job3] 00:10:50.510 filename=/dev/nvme0n4 00:10:50.510 Could not set queue depth (nvme0n1) 00:10:50.510 Could not set queue depth (nvme0n2) 00:10:50.510 Could not set queue depth (nvme0n3) 00:10:50.510 Could not set queue depth (nvme0n4) 00:10:50.766 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.766 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.766 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.766 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.766 fio-3.35 00:10:50.766 Starting 4 threads 00:10:54.040 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:54.040 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=28221440, buflen=4096 00:10:54.041 fio: pid=436125, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:54.041 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:54.041 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1880064, buflen=4096 00:10:54.041 fio: pid=436124, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:54.041 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.041 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:54.041 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9478144, buflen=4096 00:10:54.041 fio: pid=436109, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:54.041 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.041 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:54.309 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=64098304, buflen=4096 00:10:54.309 fio: pid=436122, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 
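This is the hotplug phase of the test: a 10 second read job (fio_pid 435781) is left running against the four exported namespaces while their backing bdevs are deleted out from under them, so fio reports "io_u error ... Operation not supported" (err=95) on every file instead of completing. Condensed into a sketch, using only commands that appear in this trace (paths shortened; error handling and the remaining Malloc3..Malloc6 deletions omitted):

  # start a long-running read workload against the exported namespaces, in the background
  ./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  # pull the backing bdevs out from under the live namespaces while fio is still reading
  ./scripts/rpc.py bdev_raid_delete concat0
  ./scripts/rpc.py bdev_raid_delete raid0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_malloc_delete Malloc1
  # a non-zero fio exit status is the pass condition for this phase
  wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'

The non-zero exit status really is the expected outcome, which is why the trace below checks the saved fio status and prints "nvmf hotplug test: fio failed as expected" before tearing the subsystem down.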
00:10:54.309 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.309 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:54.309 00:10:54.309 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=436109: Mon Oct 14 16:35:58 2024 00:10:54.309 read: IOPS=733, BW=2935KiB/s (3005kB/s)(9256KiB/3154msec) 00:10:54.309 slat (usec): min=6, max=29346, avg=40.15, stdev=807.31 00:10:54.309 clat (usec): min=158, max=41356, avg=1310.56, stdev=6545.33 00:10:54.309 lat (usec): min=166, max=41363, avg=1350.73, stdev=6593.06 00:10:54.309 clat percentiles (usec): 00:10:54.309 | 1.00th=[ 165], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 198], 00:10:54.309 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 231], 00:10:54.309 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 289], 00:10:54.309 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:54.309 | 99.99th=[41157] 00:10:54.309 bw ( KiB/s): min= 96, max=14003, per=8.12%, avg=2449.83, stdev=5660.32, samples=6 00:10:54.309 iops : min= 24, max= 3500, avg=612.33, stdev=1414.77, samples=6 00:10:54.309 lat (usec) : 250=73.09%, 500=23.97%, 750=0.17% 00:10:54.309 lat (msec) : 2=0.04%, 50=2.68% 00:10:54.309 cpu : usr=0.16%, sys=0.79%, ctx=2321, majf=0, minf=1 00:10:54.309 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.309 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.309 issued rwts: total=2315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.309 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.309 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=436122: Mon Oct 14 16:35:58 2024 00:10:54.309 read: IOPS=4661, BW=18.2MiB/s (19.1MB/s)(61.1MiB/3357msec) 00:10:54.309 slat (usec): min=6, max=15293, avg= 9.72, stdev=174.41 00:10:54.309 clat (usec): min=148, max=21902, avg=202.43, stdev=176.45 00:10:54.309 lat (usec): min=154, max=21909, avg=212.15, stdev=249.25 00:10:54.309 clat percentiles (usec): 00:10:54.309 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:10:54.309 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:10:54.309 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 249], 00:10:54.309 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 318], 99.95th=[ 482], 00:10:54.309 | 99.99th=[ 2835] 00:10:54.309 bw ( KiB/s): min=15599, max=20160, per=62.09%, avg=18726.50, stdev=1641.55, samples=6 00:10:54.309 iops : min= 3899, max= 5040, avg=4681.50, stdev=410.67, samples=6 00:10:54.309 lat (usec) : 250=95.28%, 500=4.67%, 750=0.01%, 1000=0.01% 00:10:54.309 lat (msec) : 2=0.01%, 4=0.01%, 50=0.01% 00:10:54.309 cpu : usr=1.13%, sys=4.08%, ctx=15654, majf=0, minf=2 00:10:54.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.310 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.310 issued rwts: total=15650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.310 job2: (groupid=0, jobs=1): 
err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=436124: Mon Oct 14 16:35:58 2024 00:10:54.310 read: IOPS=155, BW=622KiB/s (637kB/s)(1836KiB/2953msec) 00:10:54.310 slat (usec): min=5, max=13799, avg=39.89, stdev=642.98 00:10:54.310 clat (usec): min=194, max=42251, avg=6334.52, stdev=14518.63 00:10:54.310 lat (usec): min=207, max=42259, avg=6374.49, stdev=14524.19 00:10:54.310 clat percentiles (usec): 00:10:54.310 | 1.00th=[ 202], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 245], 00:10:54.310 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:10:54.310 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[41157], 95.00th=[41157], 00:10:54.310 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:54.310 | 99.99th=[42206] 00:10:54.310 bw ( KiB/s): min= 96, max= 336, per=0.79%, avg=238.40, stdev=114.32, samples=5 00:10:54.310 iops : min= 24, max= 84, avg=59.60, stdev=28.58, samples=5 00:10:54.310 lat (usec) : 250=28.26%, 500=55.43%, 750=0.87% 00:10:54.310 lat (msec) : 10=0.43%, 50=14.78% 00:10:54.310 cpu : usr=0.03%, sys=0.17%, ctx=461, majf=0, minf=1 00:10:54.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.310 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.310 issued rwts: total=460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.310 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=436125: Mon Oct 14 16:35:58 2024 00:10:54.310 read: IOPS=2515, BW=9.83MiB/s (10.3MB/s)(26.9MiB/2739msec) 00:10:54.310 slat (nsec): min=6399, max=38315, avg=7321.09, stdev=1521.06 00:10:54.310 clat (usec): min=172, max=41603, avg=386.11, stdev=2550.47 00:10:54.310 lat (usec): min=180, max=41632, avg=393.43, stdev=2550.84 00:10:54.310 clat percentiles (usec): 00:10:54.310 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:10:54.310 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 227], 00:10:54.310 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 260], 95.00th=[ 273], 00:10:54.310 | 99.00th=[ 306], 99.50th=[ 441], 99.90th=[41157], 99.95th=[41157], 00:10:54.310 | 99.99th=[41681] 00:10:54.310 bw ( KiB/s): min= 128, max=17912, per=32.20%, avg=9713.60, stdev=8899.27, samples=5 00:10:54.310 iops : min= 32, max= 4478, avg=2428.40, stdev=2224.82, samples=5 00:10:54.310 lat (usec) : 250=84.79%, 500=14.73%, 750=0.07% 00:10:54.310 lat (msec) : 50=0.39% 00:10:54.310 cpu : usr=0.77%, sys=2.12%, ctx=6891, majf=0, minf=2 00:10:54.310 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.310 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.310 issued rwts: total=6891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.310 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.310 00:10:54.310 Run status group 0 (all jobs): 00:10:54.310 READ: bw=29.5MiB/s (30.9MB/s), 622KiB/s-18.2MiB/s (637kB/s-19.1MB/s), io=98.9MiB (104MB), run=2739-3357msec 00:10:54.310 00:10:54.310 Disk stats (read/write): 00:10:54.310 nvme0n1: ios=2215/0, merge=0/0, ticks=3871/0, in_queue=3871, util=97.75% 00:10:54.310 nvme0n2: ios=15650/0, merge=0/0, ticks=3070/0, in_queue=3070, util=94.97% 00:10:54.310 nvme0n3: ios=186/0, merge=0/0, ticks=2838/0, 
in_queue=2838, util=96.08% 00:10:54.310 nvme0n4: ios=6463/0, merge=0/0, ticks=2522/0, in_queue=2522, util=96.45% 00:10:54.567 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.567 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:54.824 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.824 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:54.824 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.824 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:55.082 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.082 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 435781 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:55.339 nvmf hotplug test: fio failed as expected 00:10:55.339 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.597 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:55.597 16:36:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:55.597 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:55.597 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:55.597 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:55.598 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:55.598 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:55.598 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.598 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:55.598 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:55.598 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.598 rmmod nvme_tcp 00:10:55.598 rmmod nvme_fabrics 00:10:55.598 rmmod nvme_keyring 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 433052 ']' 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 433052 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 433052 ']' 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 433052 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 433052 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 433052' 00:10:55.856 killing process with pid 433052 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 433052 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 433052 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:55.856 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:55.857 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:55.857 16:36:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:55.857 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:55.857 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.857 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.857 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.857 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.857 16:36:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.390 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:58.390 00:10:58.390 real 0m26.883s 00:10:58.390 user 1m47.020s 00:10:58.390 sys 0m8.650s 00:10:58.390 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.390 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.390 ************************************ 00:10:58.390 END TEST nvmf_fio_target 00:10:58.390 ************************************ 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:58.391 ************************************ 00:10:58.391 START TEST nvmf_bdevio 00:10:58.391 ************************************ 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:58.391 * Looking for test storage... 
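Annotation, not part of the captured console output: a minimal sketch of the hotplug pattern the nvmf_fio_target run above exercises, assuming fio was already started against the namespaces backed by Malloc2..Malloc6 and its PID saved in $fio_pid (a placeholder here; the harness waits on the literal pid 435781), with paths relative to the spdk checkout. Hot-removing a backing bdev while I/O is in flight is what produces the err=95 (Operation not supported) entries in the fio job output above, so a non-zero fio exit status is the expected outcome.

for malloc_bdev in Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    # hot-remove the bdev backing an exported namespace while fio keeps reading
    scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1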
00:10:58.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:58.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.391 --rc genhtml_branch_coverage=1 00:10:58.391 --rc genhtml_function_coverage=1 00:10:58.391 --rc genhtml_legend=1 00:10:58.391 --rc geninfo_all_blocks=1 00:10:58.391 --rc geninfo_unexecuted_blocks=1 00:10:58.391 00:10:58.391 ' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:58.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.391 --rc genhtml_branch_coverage=1 00:10:58.391 --rc genhtml_function_coverage=1 00:10:58.391 --rc genhtml_legend=1 00:10:58.391 --rc geninfo_all_blocks=1 00:10:58.391 --rc geninfo_unexecuted_blocks=1 00:10:58.391 00:10:58.391 ' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:58.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.391 --rc genhtml_branch_coverage=1 00:10:58.391 --rc genhtml_function_coverage=1 00:10:58.391 --rc genhtml_legend=1 00:10:58.391 --rc geninfo_all_blocks=1 00:10:58.391 --rc geninfo_unexecuted_blocks=1 00:10:58.391 00:10:58.391 ' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:58.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.391 --rc genhtml_branch_coverage=1 00:10:58.391 --rc genhtml_function_coverage=1 00:10:58.391 --rc genhtml_legend=1 00:10:58.391 --rc geninfo_all_blocks=1 00:10:58.391 --rc geninfo_unexecuted_blocks=1 00:10:58.391 00:10:58.391 ' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:58.391 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:58.392 16:36:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:04.954 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.954 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:04.955 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.955 16:36:08 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:04.955 Found net devices under 0000:86:00.0: cvl_0_0 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:04.955 Found net devices under 0000:86:00.1: cvl_0_1 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.955 
16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:11:04.955 00:11:04.955 --- 10.0.0.2 ping statistics --- 00:11:04.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.955 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:11:04.955 00:11:04.955 --- 10.0.0.1 ping statistics --- 00:11:04.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.955 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=440660 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 440660 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 440660 ']' 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.955 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.955 [2024-10-14 16:36:08.859683] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
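Annotation, not part of the captured console output: the nvmf_tcp_init steps above, condensed into the plain commands they expand to. The target-side port of the e810 NIC (cvl_0_0) is moved into its own network namespace and given 10.0.0.2, the initiator-side port (cvl_0_1) keeps 10.0.0.1 in the default namespace, TCP port 4420 is opened for the NVMe/TCP listener, and both directions are ping-tested. Interface names are specific to this rig.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> initiator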
00:11:04.955 [2024-10-14 16:36:08.859730] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.955 [2024-10-14 16:36:08.933031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.955 [2024-10-14 16:36:08.974627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.955 [2024-10-14 16:36:08.974661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.955 [2024-10-14 16:36:08.974668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.955 [2024-10-14 16:36:08.974674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.955 [2024-10-14 16:36:08.974680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.955 [2024-10-14 16:36:08.976140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:04.955 [2024-10-14 16:36:08.976230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:04.955 [2024-10-14 16:36:08.976312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.955 [2024-10-14 16:36:08.976313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:04.955 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.955 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:04.955 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:04.955 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:04.955 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.955 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.956 [2024-10-14 16:36:09.124337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.956 Malloc0 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.956 16:36:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.956 [2024-10-14 16:36:09.198919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:04.956 { 00:11:04.956 "params": { 00:11:04.956 "name": "Nvme$subsystem", 00:11:04.956 "trtype": "$TEST_TRANSPORT", 00:11:04.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.956 "adrfam": "ipv4", 00:11:04.956 "trsvcid": "$NVMF_PORT", 00:11:04.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.956 "hdgst": ${hdgst:-false}, 00:11:04.956 "ddgst": ${ddgst:-false} 00:11:04.956 }, 00:11:04.956 "method": "bdev_nvme_attach_controller" 00:11:04.956 } 00:11:04.956 EOF 00:11:04.956 )") 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:11:04.956 16:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:04.956 "params": { 00:11:04.956 "name": "Nvme1", 00:11:04.956 "trtype": "tcp", 00:11:04.956 "traddr": "10.0.0.2", 00:11:04.956 "adrfam": "ipv4", 00:11:04.956 "trsvcid": "4420", 00:11:04.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.956 "hdgst": false, 00:11:04.956 "ddgst": false 00:11:04.956 }, 00:11:04.956 "method": "bdev_nvme_attach_controller" 00:11:04.956 }' 00:11:04.956 [2024-10-14 16:36:09.248730] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:11:04.956 [2024-10-14 16:36:09.248772] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440835 ] 00:11:04.956 [2024-10-14 16:36:09.316884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:04.956 [2024-10-14 16:36:09.360838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.956 [2024-10-14 16:36:09.360946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.956 [2024-10-14 16:36:09.360947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.212 I/O targets: 00:11:05.212 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:05.212 00:11:05.212 00:11:05.212 CUnit - A unit testing framework for C - Version 2.1-3 00:11:05.212 http://cunit.sourceforge.net/ 00:11:05.212 00:11:05.212 00:11:05.212 Suite: bdevio tests on: Nvme1n1 00:11:05.212 Test: blockdev write read block ...passed 00:11:05.212 Test: blockdev write zeroes read block ...passed 00:11:05.212 Test: blockdev write zeroes read no split ...passed 00:11:05.212 Test: blockdev write zeroes read split ...passed 00:11:05.212 Test: blockdev write zeroes read split partial ...passed 00:11:05.212 Test: blockdev reset ...[2024-10-14 16:36:09.792556] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:05.212 [2024-10-14 16:36:09.792625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f5400 (9): Bad file descriptor 00:11:05.212 [2024-10-14 16:36:09.846952] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:05.212 passed 00:11:05.212 Test: blockdev write read 8 blocks ...passed 00:11:05.212 Test: blockdev write read size > 128k ...passed 00:11:05.212 Test: blockdev write read invalid size ...passed 00:11:05.468 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:05.468 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:05.468 Test: blockdev write read max offset ...passed 00:11:05.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:05.468 Test: blockdev writev readv 8 blocks ...passed 00:11:05.468 Test: blockdev writev readv 30 x 1block ...passed 00:11:05.468 Test: blockdev writev readv block ...passed 00:11:05.468 Test: blockdev writev readv size > 128k ...passed 00:11:05.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:05.468 Test: blockdev comparev and writev ...[2024-10-14 16:36:10.016670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.468 [2024-10-14 16:36:10.016698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:05.468 [2024-10-14 16:36:10.016713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.468 [2024-10-14 16:36:10.016722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:05.468 [2024-10-14 16:36:10.016975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.468 [2024-10-14 16:36:10.016986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:05.468 [2024-10-14 16:36:10.016998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.468 [2024-10-14 16:36:10.017006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:05.469 [2024-10-14 16:36:10.017253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.469 [2024-10-14 16:36:10.017263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:05.469 [2024-10-14 16:36:10.017275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.469 [2024-10-14 16:36:10.017282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:05.469 [2024-10-14 16:36:10.017518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.469 [2024-10-14 16:36:10.017530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:05.469 [2024-10-14 16:36:10.017542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.469 [2024-10-14 16:36:10.017549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:05.469 passed 00:11:05.469 Test: blockdev nvme passthru rw ...passed 00:11:05.469 Test: blockdev nvme passthru vendor specific ...[2024-10-14 16:36:10.099959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.469 [2024-10-14 16:36:10.099985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:05.469 [2024-10-14 16:36:10.100108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.469 [2024-10-14 16:36:10.100118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:05.469 [2024-10-14 16:36:10.100224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.469 [2024-10-14 16:36:10.100234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:05.469 [2024-10-14 16:36:10.100335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.469 [2024-10-14 16:36:10.100345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:05.469 passed 00:11:05.726 Test: blockdev nvme admin passthru ...passed 00:11:05.726 Test: blockdev copy ...passed 00:11:05.726 00:11:05.726 Run Summary: Type Total Ran Passed Failed Inactive 00:11:05.726 suites 1 1 n/a 0 0 00:11:05.726 tests 23 23 23 0 0 00:11:05.726 asserts 152 152 152 0 n/a 00:11:05.726 00:11:05.726 Elapsed time = 1.038 seconds 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.726 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.726 rmmod nvme_tcp 00:11:05.726 rmmod nvme_fabrics 00:11:05.726 rmmod nvme_keyring 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
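Annotation, not part of the captured console output: a condensed sketch of how the bdevio run summarised above is driven. The controller entry is the one gen_nvmf_target_json printed earlier in this log; the surrounding "subsystems" -> "bdev" -> "config" wrapper is the standard SPDK JSON-config layout and is assumed here, since the log only shows the inner entry, and the process substitution stands in for the /dev/fd/62 descriptor the harness passes. Run from the spdk checkout.

test/bdev/bdevio/bdevio --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
)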
00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 440660 ']' 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 440660 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 440660 ']' 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 440660 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 440660 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 440660' 00:11:05.983 killing process with pid 440660 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 440660 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 440660 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.983 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.512 00:11:08.512 real 0m10.083s 00:11:08.512 user 0m10.330s 00:11:08.512 sys 0m5.094s 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.512 ************************************ 00:11:08.512 END TEST nvmf_bdevio 00:11:08.512 ************************************ 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:08.512 00:11:08.512 real 4m35.470s 00:11:08.512 user 10m19.179s 00:11:08.512 sys 1m37.334s 00:11:08.512 
16:36:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:08.512 ************************************ 00:11:08.512 END TEST nvmf_target_core 00:11:08.512 ************************************ 00:11:08.512 16:36:12 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:08.512 16:36:12 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.512 16:36:12 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.512 16:36:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.512 ************************************ 00:11:08.512 START TEST nvmf_target_extra 00:11:08.512 ************************************ 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:08.512 * Looking for test storage... 00:11:08.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.512 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.513 --rc genhtml_branch_coverage=1 00:11:08.513 --rc genhtml_function_coverage=1 00:11:08.513 --rc genhtml_legend=1 00:11:08.513 --rc geninfo_all_blocks=1 00:11:08.513 --rc geninfo_unexecuted_blocks=1 00:11:08.513 00:11:08.513 ' 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.513 --rc genhtml_branch_coverage=1 00:11:08.513 --rc genhtml_function_coverage=1 00:11:08.513 --rc genhtml_legend=1 00:11:08.513 --rc geninfo_all_blocks=1 00:11:08.513 --rc geninfo_unexecuted_blocks=1 00:11:08.513 00:11:08.513 ' 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.513 --rc genhtml_branch_coverage=1 00:11:08.513 --rc genhtml_function_coverage=1 00:11:08.513 --rc genhtml_legend=1 00:11:08.513 --rc geninfo_all_blocks=1 00:11:08.513 --rc geninfo_unexecuted_blocks=1 00:11:08.513 00:11:08.513 ' 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.513 --rc genhtml_branch_coverage=1 00:11:08.513 --rc genhtml_function_coverage=1 00:11:08.513 --rc genhtml_legend=1 00:11:08.513 --rc geninfo_all_blocks=1 00:11:08.513 --rc geninfo_unexecuted_blocks=1 00:11:08.513 00:11:08.513 ' 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.513 16:36:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.513 ************************************ 00:11:08.513 START TEST nvmf_example 00:11:08.513 ************************************ 00:11:08.513 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:08.513 * Looking for test storage... 
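(Annotation, not captured console output: the nvmf_example trace that follows performs its own network and target setup before measuring I/O. The sketch below condenses that flow as an outline only; every command, interface name, address and NQN in it is taken from the trace later in this section rather than assumed, and the RPC names are exactly those issued by the rpc_cmd lines in the log.)

  # Split one e810 port into a target-side network namespace (as done by nvmf_tcp_init in the trace)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Start the example nvmf target application inside the namespace (flags as captured in the trace)
  ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &

  # RPCs the harness then issues against that app (see the rpc_cmd lines in the trace):
  #   nvmf_create_transport -t tcp -o -u 8192
  #   bdev_malloc_create 64 512
  #   nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  #   nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  #   nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Drive the target from the host side with mixed random read/write I/O for 10 seconds
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Keeping the target behind the namespace lets the target (10.0.0.2) and initiator (10.0.0.1) share a single machine; the ping checks in the trace confirm both directions are reachable before the target application is started.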
00:11:08.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.513 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.513 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.513 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.772 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.772 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.772 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.772 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.772 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.772 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.772 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.772 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.772 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.772 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.773 --rc genhtml_branch_coverage=1 00:11:08.773 --rc genhtml_function_coverage=1 00:11:08.773 --rc genhtml_legend=1 00:11:08.773 --rc geninfo_all_blocks=1 00:11:08.773 --rc geninfo_unexecuted_blocks=1 00:11:08.773 00:11:08.773 ' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.773 --rc genhtml_branch_coverage=1 00:11:08.773 --rc genhtml_function_coverage=1 00:11:08.773 --rc genhtml_legend=1 00:11:08.773 --rc geninfo_all_blocks=1 00:11:08.773 --rc geninfo_unexecuted_blocks=1 00:11:08.773 00:11:08.773 ' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.773 --rc genhtml_branch_coverage=1 00:11:08.773 --rc genhtml_function_coverage=1 00:11:08.773 --rc genhtml_legend=1 00:11:08.773 --rc geninfo_all_blocks=1 00:11:08.773 --rc geninfo_unexecuted_blocks=1 00:11:08.773 00:11:08.773 ' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.773 --rc genhtml_branch_coverage=1 00:11:08.773 --rc genhtml_function_coverage=1 00:11:08.773 --rc genhtml_legend=1 00:11:08.773 --rc geninfo_all_blocks=1 00:11:08.773 --rc geninfo_unexecuted_blocks=1 00:11:08.773 00:11:08.773 ' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:08.773 16:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:08.773 16:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:08.773 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:08.774 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.774 16:36:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:15.351 16:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:15.351 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:15.351 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:15.351 Found net devices under 0000:86:00.0: cvl_0_0 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.351 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:15.352 Found net devices under 0000:86:00.1: cvl_0_1 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.352 16:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.352 16:36:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:15.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:11:15.352 00:11:15.352 --- 10.0.0.2 ping statistics --- 00:11:15.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.352 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:11:15.352 00:11:15.352 --- 10.0.0.1 ping statistics --- 00:11:15.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.352 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=444878 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 444878 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 444878 ']' 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.352 16:36:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.610 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.610 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:15.610 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:15.610 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.610 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.610 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.610 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.610 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.610 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.867 16:36:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.867 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.868 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:15.868 16:36:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:25.825 Initializing NVMe Controllers 00:11:25.825 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.825 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:25.825 Initialization complete. Launching workers. 00:11:25.825 ======================================================== 00:11:25.825 Latency(us) 00:11:25.825 Device Information : IOPS MiB/s Average min max 00:11:25.825 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18136.19 70.84 3528.27 681.80 15592.09 00:11:25.825 ======================================================== 00:11:25.825 Total : 18136.19 70.84 3528.27 681.80 15592.09 00:11:25.825 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.082 rmmod nvme_tcp 00:11:26.082 rmmod nvme_fabrics 00:11:26.082 rmmod nvme_keyring 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 444878 ']' 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 444878 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 444878 ']' 00:11:26.082 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 444878 00:11:26.083 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:26.083 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.083 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 444878 00:11:26.083 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # 
process_name=nvmf 00:11:26.083 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:26.083 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 444878' 00:11:26.083 killing process with pid 444878 00:11:26.083 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 444878 00:11:26.083 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 444878 00:11:26.341 nvmf threads initialize successfully 00:11:26.341 bdev subsystem init successfully 00:11:26.341 created a nvmf target service 00:11:26.341 create targets's poll groups done 00:11:26.341 all subsystems of target started 00:11:26.341 nvmf target is running 00:11:26.341 all subsystems of target stopped 00:11:26.341 destroy targets's poll groups done 00:11:26.341 destroyed the nvmf target service 00:11:26.341 bdev subsystem finish successfully 00:11:26.341 nvmf threads destroy successfully 00:11:26.341 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:26.341 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:26.341 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:26.341 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:26.342 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:11:26.342 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:26.342 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:11:26.342 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.342 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.342 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.342 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.342 16:36:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.246 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.246 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:28.246 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.246 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.505 00:11:28.505 real 0m19.864s 00:11:28.505 user 0m45.987s 00:11:28.505 sys 0m6.055s 00:11:28.505 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.505 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.505 ************************************ 00:11:28.505 END TEST nvmf_example 00:11:28.505 ************************************ 00:11:28.505 16:36:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:28.505 16:36:32 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:28.505 16:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.505 16:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.505 ************************************ 00:11:28.505 START TEST nvmf_filesystem 00:11:28.505 ************************************ 00:11:28.505 16:36:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:28.505 * Looking for test storage... 00:11:28.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.505 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.506 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.506 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:28.506 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.506 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:28.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.506 --rc genhtml_branch_coverage=1 00:11:28.506 --rc genhtml_function_coverage=1 00:11:28.506 --rc genhtml_legend=1 00:11:28.506 --rc geninfo_all_blocks=1 00:11:28.506 --rc geninfo_unexecuted_blocks=1 00:11:28.506 00:11:28.506 ' 00:11:28.506 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:28.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.506 --rc genhtml_branch_coverage=1 00:11:28.506 --rc genhtml_function_coverage=1 00:11:28.506 --rc genhtml_legend=1 00:11:28.506 --rc geninfo_all_blocks=1 00:11:28.506 --rc geninfo_unexecuted_blocks=1 00:11:28.506 00:11:28.506 ' 00:11:28.506 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:28.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.506 --rc genhtml_branch_coverage=1 00:11:28.506 --rc genhtml_function_coverage=1 00:11:28.506 --rc genhtml_legend=1 00:11:28.506 --rc geninfo_all_blocks=1 00:11:28.506 --rc geninfo_unexecuted_blocks=1 00:11:28.506 00:11:28.506 ' 00:11:28.506 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:28.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.506 --rc genhtml_branch_coverage=1 00:11:28.506 --rc genhtml_function_coverage=1 00:11:28.506 --rc genhtml_legend=1 00:11:28.506 --rc geninfo_all_blocks=1 00:11:28.506 --rc geninfo_unexecuted_blocks=1 00:11:28.506 00:11:28.506 ' 00:11:28.506 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:28.506 16:36:33 
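(The lcov version probe traced above is a component-wise numeric compare of dotted versions. A compressed sketch, assuming plain dotted versions only; the real cmp_versions in scripts/common.sh also handles '>', '==' and '-'/':'-separated suffixes.)

    lt() {
        # Return success when dotted version $1 is strictly less than $2.
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov predates 2.x, use the legacy --rc option names'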
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:28.506 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:28.769 16:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:28.769 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:28.770 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:28.770 #define SPDK_CONFIG_H 00:11:28.770 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:28.770 #define SPDK_CONFIG_APPS 1 00:11:28.770 #define SPDK_CONFIG_ARCH native 00:11:28.770 #undef SPDK_CONFIG_ASAN 00:11:28.770 #undef SPDK_CONFIG_AVAHI 00:11:28.770 #undef SPDK_CONFIG_CET 00:11:28.770 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:28.770 #define SPDK_CONFIG_COVERAGE 1 00:11:28.770 #define SPDK_CONFIG_CROSS_PREFIX 00:11:28.770 #undef SPDK_CONFIG_CRYPTO 00:11:28.770 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:28.770 #undef SPDK_CONFIG_CUSTOMOCF 00:11:28.770 #undef SPDK_CONFIG_DAOS 00:11:28.770 #define SPDK_CONFIG_DAOS_DIR 00:11:28.770 #define SPDK_CONFIG_DEBUG 1 00:11:28.770 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:28.770 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:28.770 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:28.770 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:28.770 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:28.770 #undef SPDK_CONFIG_DPDK_UADK 00:11:28.770 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:28.770 #define SPDK_CONFIG_EXAMPLES 1 00:11:28.770 #undef SPDK_CONFIG_FC 00:11:28.770 #define SPDK_CONFIG_FC_PATH 00:11:28.770 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:28.770 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:28.770 #define SPDK_CONFIG_FSDEV 1 00:11:28.770 #undef SPDK_CONFIG_FUSE 00:11:28.770 #undef SPDK_CONFIG_FUZZER 00:11:28.770 #define SPDK_CONFIG_FUZZER_LIB 00:11:28.770 #undef SPDK_CONFIG_GOLANG 00:11:28.770 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:28.770 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:28.770 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:28.770 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:28.770 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:28.770 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:28.770 #undef SPDK_CONFIG_HAVE_LZ4 00:11:28.770 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:28.770 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:28.770 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:28.770 #define SPDK_CONFIG_IDXD 1 00:11:28.770 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:28.770 #undef SPDK_CONFIG_IPSEC_MB 00:11:28.770 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:28.770 #define SPDK_CONFIG_ISAL 1 00:11:28.770 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:28.770 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:28.770 #define SPDK_CONFIG_LIBDIR 00:11:28.770 #undef SPDK_CONFIG_LTO 00:11:28.770 #define SPDK_CONFIG_MAX_LCORES 128 00:11:28.770 #define SPDK_CONFIG_NVME_CUSE 1 00:11:28.770 #undef SPDK_CONFIG_OCF 00:11:28.770 #define SPDK_CONFIG_OCF_PATH 00:11:28.770 #define SPDK_CONFIG_OPENSSL_PATH 00:11:28.770 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:28.770 #define SPDK_CONFIG_PGO_DIR 00:11:28.770 #undef SPDK_CONFIG_PGO_USE 00:11:28.771 #define SPDK_CONFIG_PREFIX /usr/local 00:11:28.771 #undef SPDK_CONFIG_RAID5F 00:11:28.771 #undef SPDK_CONFIG_RBD 00:11:28.771 #define SPDK_CONFIG_RDMA 1 00:11:28.771 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:28.771 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:28.771 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:28.771 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:28.771 #define SPDK_CONFIG_SHARED 1 00:11:28.771 #undef SPDK_CONFIG_SMA 00:11:28.771 #define SPDK_CONFIG_TESTS 1 00:11:28.771 #undef SPDK_CONFIG_TSAN 00:11:28.771 #define SPDK_CONFIG_UBLK 1 00:11:28.771 #define SPDK_CONFIG_UBSAN 1 00:11:28.771 #undef SPDK_CONFIG_UNIT_TESTS 00:11:28.771 #undef SPDK_CONFIG_URING 00:11:28.771 #define 
SPDK_CONFIG_URING_PATH 00:11:28.771 #undef SPDK_CONFIG_URING_ZNS 00:11:28.771 #undef SPDK_CONFIG_USDT 00:11:28.771 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:28.771 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:28.771 #define SPDK_CONFIG_VFIO_USER 1 00:11:28.771 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:28.771 #define SPDK_CONFIG_VHOST 1 00:11:28.771 #define SPDK_CONFIG_VIRTIO 1 00:11:28.771 #undef SPDK_CONFIG_VTUNE 00:11:28.771 #define SPDK_CONFIG_VTUNE_DIR 00:11:28.771 #define SPDK_CONFIG_WERROR 1 00:11:28.771 #define SPDK_CONFIG_WPDK_DIR 00:11:28.771 #undef SPDK_CONFIG_XNVME 00:11:28.771 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.771 16:36:33 
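(The applications.sh check traced above simply pattern-matches the generated config header. A minimal equivalent, with the path taken from the trace; the echo is a hypothetical placeholder for selecting the debug application wrappers.)

    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]] &&
       (( SPDK_AUTOTEST_DEBUG_APPS )); then
        echo 'debug build and SPDK_AUTOTEST_DEBUG_APPS set: debug app variants would be selected'
    fi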
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:28.771 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:28.772 
16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:28.772 16:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:28.772 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 447141 ]] 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 447141 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:11:28.773 
16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.kP6PiX 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.kP6PiX/tests/target /tmp/spdk.kP6PiX 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=606707712 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:28.773 16:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4677722112 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189395214336 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963949056 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6568734720 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:28.773 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97971941376 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981972480 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169748992 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192793088 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981534208 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981976576 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=442368 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:28.774 16:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:28.774 * Looking for test storage... 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189395214336 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8783327232 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:28.774 16:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.774 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:28.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.774 --rc genhtml_branch_coverage=1 00:11:28.774 --rc genhtml_function_coverage=1 00:11:28.774 --rc genhtml_legend=1 00:11:28.774 --rc geninfo_all_blocks=1 00:11:28.774 --rc geninfo_unexecuted_blocks=1 00:11:28.774 00:11:28.775 ' 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:28.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.775 --rc genhtml_branch_coverage=1 00:11:28.775 --rc genhtml_function_coverage=1 00:11:28.775 --rc genhtml_legend=1 00:11:28.775 --rc geninfo_all_blocks=1 00:11:28.775 --rc geninfo_unexecuted_blocks=1 00:11:28.775 00:11:28.775 ' 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:28.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.775 --rc genhtml_branch_coverage=1 00:11:28.775 --rc genhtml_function_coverage=1 00:11:28.775 --rc genhtml_legend=1 00:11:28.775 --rc geninfo_all_blocks=1 00:11:28.775 --rc geninfo_unexecuted_blocks=1 00:11:28.775 00:11:28.775 ' 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:28.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.775 --rc genhtml_branch_coverage=1 00:11:28.775 --rc genhtml_function_coverage=1 00:11:28.775 --rc genhtml_legend=1 00:11:28.775 --rc geninfo_all_blocks=1 00:11:28.775 --rc geninfo_unexecuted_blocks=1 00:11:28.775 00:11:28.775 ' 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.775 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.034 16:36:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:35.605 
16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:35.605 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:35.605 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:35.605 Found net devices under 0000:86:00.0: cvl_0_0 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.605 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:35.605 Found net devices under 
0000:86:00.1: cvl_0_1 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:11:35.606 00:11:35.606 --- 10.0.0.2 ping statistics --- 00:11:35.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.606 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:11:35.606 00:11:35.606 --- 10.0.0.1 ping statistics --- 00:11:35.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.606 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.606 ************************************ 00:11:35.606 START TEST nvmf_filesystem_no_in_capsule 00:11:35.606 ************************************ 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
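(Editor's note) The network bring-up traced above can be condensed into the following sketch of the same commands; interface names (cvl_0_0 / cvl_0_1), the namespace name cvl_0_0_ns_spdk, the 10.0.0.1/10.0.0.2 addresses and port 4420 are taken from this particular run, not fixed values:
    # clear any stale addressing, then move the target-side port into its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (default namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1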
00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=450390 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 450390 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 450390 ']' 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.606 [2024-10-14 16:36:39.530945] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:11:35.606 [2024-10-14 16:36:39.530983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.606 [2024-10-14 16:36:39.603203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.606 [2024-10-14 16:36:39.645058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.606 [2024-10-14 16:36:39.645095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.606 [2024-10-14 16:36:39.645102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.606 [2024-10-14 16:36:39.645107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.606 [2024-10-14 16:36:39.645113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
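(Editor's note) The target launch traced here amounts to running nvmf_tgt inside the target namespace and waiting for its RPC socket; a minimal approximation, assuming the process is backgrounded and that polling for /var/tmp/spdk.sock is an acceptable stand-in for the harness's waitforlisten helper (the real helper also records the PID and installs cleanup traps):
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # crude stand-in for waitforlisten: poll until the RPC UNIX socket appears
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done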
00:11:35.606 [2024-10-14 16:36:39.646667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.606 [2024-10-14 16:36:39.646776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.606 [2024-10-14 16:36:39.646881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.606 [2024-10-14 16:36:39.646882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.606 [2024-10-14 16:36:39.783090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.606 Malloc1 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.606 16:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.606 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.607 [2024-10-14 16:36:39.934092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:35.607 { 00:11:35.607 "name": "Malloc1", 00:11:35.607 "aliases": [ 00:11:35.607 "9916959f-f0db-465c-bea8-5eff9e2cb44b" 00:11:35.607 ], 00:11:35.607 "product_name": "Malloc disk", 00:11:35.607 "block_size": 512, 00:11:35.607 "num_blocks": 1048576, 00:11:35.607 "uuid": "9916959f-f0db-465c-bea8-5eff9e2cb44b", 00:11:35.607 "assigned_rate_limits": { 00:11:35.607 "rw_ios_per_sec": 0, 00:11:35.607 "rw_mbytes_per_sec": 0, 00:11:35.607 "r_mbytes_per_sec": 0, 00:11:35.607 "w_mbytes_per_sec": 0 00:11:35.607 }, 00:11:35.607 "claimed": true, 00:11:35.607 "claim_type": "exclusive_write", 00:11:35.607 "zoned": false, 00:11:35.607 "supported_io_types": { 00:11:35.607 "read": 
true, 00:11:35.607 "write": true, 00:11:35.607 "unmap": true, 00:11:35.607 "flush": true, 00:11:35.607 "reset": true, 00:11:35.607 "nvme_admin": false, 00:11:35.607 "nvme_io": false, 00:11:35.607 "nvme_io_md": false, 00:11:35.607 "write_zeroes": true, 00:11:35.607 "zcopy": true, 00:11:35.607 "get_zone_info": false, 00:11:35.607 "zone_management": false, 00:11:35.607 "zone_append": false, 00:11:35.607 "compare": false, 00:11:35.607 "compare_and_write": false, 00:11:35.607 "abort": true, 00:11:35.607 "seek_hole": false, 00:11:35.607 "seek_data": false, 00:11:35.607 "copy": true, 00:11:35.607 "nvme_iov_md": false 00:11:35.607 }, 00:11:35.607 "memory_domains": [ 00:11:35.607 { 00:11:35.607 "dma_device_id": "system", 00:11:35.607 "dma_device_type": 1 00:11:35.607 }, 00:11:35.607 { 00:11:35.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.607 "dma_device_type": 2 00:11:35.607 } 00:11:35.607 ], 00:11:35.607 "driver_specific": {} 00:11:35.607 } 00:11:35.607 ]' 00:11:35.607 16:36:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:35.607 16:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:35.607 16:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:35.607 16:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:35.607 16:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:35.607 16:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:35.607 16:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:35.607 16:36:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.542 16:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.542 16:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:36.542 16:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.542 16:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:36.542 16:36:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:39.099 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:39.545 16:36:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:40.544 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:40.544 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:40.544 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:40.544 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.544 16:36:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.544 ************************************ 00:11:40.544 START TEST filesystem_ext4 00:11:40.544 ************************************ 00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
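(Editor's note) The initiator-side steps traced above reduce to connecting to the exported subsystem, locating the resulting block device by its serial, and carving a single GPT partition for the filesystem tests; a condensed sketch using the values from this run (subsystem nqn.2016-06.io.spdk:cnode1, serial SPDKISFASTANDAWESOME, device nvme0n1):
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --hostid=00ad29c2-ccbd-e911-906e-0017a4403562
    # wait until the namespace shows up, then find its device node by serial
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 1; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 in this run
    # one GPT partition spanning the whole 512 MiB namespace
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    mkdir -p /mnt/device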
00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:40.544 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:40.544 mke2fs 1.47.0 (5-Feb-2023) 00:11:40.544 Discarding device blocks: 0/522240 done 00:11:40.544 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:40.544 Filesystem UUID: e9ce6709-516a-4c31-96f7-df0a5371af58 00:11:40.544 Superblock backups stored on blocks: 00:11:40.544 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:40.544 00:11:40.544 Allocating group tables: 0/64 done 00:11:40.544 Writing inode tables: 0/64 done 00:11:40.802 Creating journal (8192 blocks): done 00:11:40.802 Writing superblocks and filesystem accounting information: 0/64 done 00:11:40.802 00:11:40.802 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:40.802 16:36:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:46.057 
16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 450390 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.057 00:11:46.057 real 0m5.596s 00:11:46.057 user 0m0.021s 00:11:46.057 sys 0m0.075s 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:46.057 ************************************ 00:11:46.057 END TEST filesystem_ext4 00:11:46.057 ************************************ 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.057 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.315 ************************************ 00:11:46.315 START TEST filesystem_btrfs 00:11:46.315 ************************************ 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:46.315 16:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:46.315 btrfs-progs v6.8.1 00:11:46.315 See https://btrfs.readthedocs.io for more information. 00:11:46.315 00:11:46.315 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:46.315 NOTE: several default settings have changed in version 5.15, please make sure 00:11:46.315 this does not affect your deployments: 00:11:46.315 - DUP for metadata (-m dup) 00:11:46.315 - enabled no-holes (-O no-holes) 00:11:46.315 - enabled free-space-tree (-R free-space-tree) 00:11:46.315 00:11:46.315 Label: (null) 00:11:46.315 UUID: d36341eb-c249-4814-8880-f29016b93373 00:11:46.315 Node size: 16384 00:11:46.315 Sector size: 4096 (CPU page size: 4096) 00:11:46.315 Filesystem size: 510.00MiB 00:11:46.315 Block group profiles: 00:11:46.315 Data: single 8.00MiB 00:11:46.315 Metadata: DUP 32.00MiB 00:11:46.315 System: DUP 8.00MiB 00:11:46.315 SSD detected: yes 00:11:46.315 Zoned device: no 00:11:46.315 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:46.315 Checksum: crc32c 00:11:46.315 Number of devices: 1 00:11:46.315 Devices: 00:11:46.315 ID SIZE PATH 00:11:46.315 1 510.00MiB /dev/nvme0n1p1 00:11:46.315 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:46.315 16:36:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 450390 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.572 
16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.572 00:11:46.572 real 0m0.500s 00:11:46.572 user 0m0.022s 00:11:46.572 sys 0m0.116s 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.572 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:46.572 ************************************ 00:11:46.572 END TEST filesystem_btrfs 00:11:46.572 ************************************ 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.830 ************************************ 00:11:46.830 START TEST filesystem_xfs 00:11:46.830 ************************************ 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:46.830 16:36:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:46.830 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:46.830 = sectsz=512 attr=2, projid32bit=1 00:11:46.830 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:46.830 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:46.830 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:46.830 = sunit=0 swidth=0 blks 00:11:46.830 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:46.830 log =internal log bsize=4096 blocks=16384, version=2 00:11:46.830 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:46.830 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:47.762 Discarding blocks...Done. 00:11:47.762 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:47.762 16:36:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 450390 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.658 00:11:49.658 real 0m2.690s 00:11:49.658 user 0m0.029s 00:11:49.658 sys 0m0.064s 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:49.658 ************************************ 00:11:49.658 END TEST filesystem_xfs 00:11:49.658 ************************************ 00:11:49.658 16:36:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.917 16:36:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 450390 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 450390 ']' 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 450390 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 450390 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 450390' 00:11:49.917 killing process with pid 450390 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 450390 00:11:49.917 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 450390 00:11:50.176 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:50.176 00:11:50.176 real 0m15.296s 00:11:50.176 user 1m0.141s 00:11:50.176 sys 0m1.334s 00:11:50.176 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.176 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.176 ************************************ 00:11:50.176 END TEST nvmf_filesystem_no_in_capsule 00:11:50.176 ************************************ 00:11:50.176 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:50.176 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.176 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.176 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.434 ************************************ 00:11:50.434 START TEST nvmf_filesystem_in_capsule 00:11:50.434 ************************************ 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=453157 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 453157 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 453157 ']' 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.434 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
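The in-capsule variant starts a fresh nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and blocks until its RPC socket answers before any configuration is issued. Roughly, with the flags copied from the log and the polling loop as an illustrative stand-in for waitforlisten:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # illustrative stand-in for waitforlisten: poll until the RPC socket responds
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done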
00:11:50.435 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.435 16:36:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.435 [2024-10-14 16:36:54.903253] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:11:50.435 [2024-10-14 16:36:54.903296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.435 [2024-10-14 16:36:54.978086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.435 [2024-10-14 16:36:55.018032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.435 [2024-10-14 16:36:55.018069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.435 [2024-10-14 16:36:55.018079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.435 [2024-10-14 16:36:55.018085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.435 [2024-10-14 16:36:55.018090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.435 [2024-10-14 16:36:55.019715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.435 [2024-10-14 16:36:55.019826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.435 [2024-10-14 16:36:55.019934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.435 [2024-10-14 16:36:55.019934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.693 [2024-10-14 16:36:55.165009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.693 16:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.693 Malloc1 00:11:50.693 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.694 [2024-10-14 16:36:55.307185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:50.694 16:36:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.694 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.952 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.952 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:50.952 { 00:11:50.952 "name": "Malloc1", 00:11:50.952 "aliases": [ 00:11:50.952 "f5454f27-1751-4da2-bb47-575ddac3ba3a" 00:11:50.952 ], 00:11:50.952 "product_name": "Malloc disk", 00:11:50.952 "block_size": 512, 00:11:50.952 "num_blocks": 1048576, 00:11:50.952 "uuid": "f5454f27-1751-4da2-bb47-575ddac3ba3a", 00:11:50.952 "assigned_rate_limits": { 00:11:50.952 "rw_ios_per_sec": 0, 00:11:50.952 "rw_mbytes_per_sec": 0, 00:11:50.952 "r_mbytes_per_sec": 0, 00:11:50.952 "w_mbytes_per_sec": 0 00:11:50.953 }, 00:11:50.953 "claimed": true, 00:11:50.953 "claim_type": "exclusive_write", 00:11:50.953 "zoned": false, 00:11:50.953 "supported_io_types": { 00:11:50.953 "read": true, 00:11:50.953 "write": true, 00:11:50.953 "unmap": true, 00:11:50.953 "flush": true, 00:11:50.953 "reset": true, 00:11:50.953 "nvme_admin": false, 00:11:50.953 "nvme_io": false, 00:11:50.953 "nvme_io_md": false, 00:11:50.953 "write_zeroes": true, 00:11:50.953 "zcopy": true, 00:11:50.953 "get_zone_info": false, 00:11:50.953 "zone_management": false, 00:11:50.953 "zone_append": false, 00:11:50.953 "compare": false, 00:11:50.953 "compare_and_write": false, 00:11:50.953 "abort": true, 00:11:50.953 "seek_hole": false, 00:11:50.953 "seek_data": false, 00:11:50.953 "copy": true, 00:11:50.953 "nvme_iov_md": false 00:11:50.953 }, 00:11:50.953 "memory_domains": [ 00:11:50.953 { 00:11:50.953 "dma_device_id": "system", 00:11:50.953 "dma_device_type": 1 00:11:50.953 }, 00:11:50.953 { 00:11:50.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.953 "dma_device_type": 2 00:11:50.953 } 00:11:50.953 ], 00:11:50.953 "driver_specific": {} 00:11:50.953 } 00:11:50.953 ]' 00:11:50.953 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:50.953 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:50.953 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:50.953 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:50.953 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:50.953 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:50.953 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:50.953 16:36:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.327 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.327 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:52.327 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.327 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:52.327 16:36:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:54.225 16:36:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:54.483 16:36:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:55.049 16:36:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:55.982 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:55.982 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.983 ************************************ 00:11:55.983 START TEST filesystem_in_capsule_ext4 00:11:55.983 ************************************ 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:55.983 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:55.983 mke2fs 1.47.0 (5-Feb-2023) 00:11:55.983 Discarding device blocks: 0/522240 done 00:11:55.983 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:55.983 Filesystem UUID: 406918fd-1d36-4339-8a53-f910b9b5e56e 00:11:55.983 Superblock backups stored on blocks: 00:11:55.983 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:55.983 00:11:55.983 Allocating group tables: 0/64 done 00:11:55.983 Writing inode tables: 
0/64 done 00:11:56.241 Creating journal (8192 blocks): done 00:11:56.241 Writing superblocks and filesystem accounting information: 0/64 done 00:11:56.241 00:11:56.241 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:56.241 16:37:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.806 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.806 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:02.806 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.806 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:02.806 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:02.806 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 453157 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.807 00:12:02.807 real 0m6.189s 00:12:02.807 user 0m0.027s 00:12:02.807 sys 0m0.065s 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:02.807 ************************************ 00:12:02.807 END TEST filesystem_in_capsule_ext4 00:12:02.807 ************************************ 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.807 
************************************ 00:12:02.807 START TEST filesystem_in_capsule_btrfs 00:12:02.807 ************************************ 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:02.807 btrfs-progs v6.8.1 00:12:02.807 See https://btrfs.readthedocs.io for more information. 00:12:02.807 00:12:02.807 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
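As the trace above shows, make_filesystem in common/autotest_common.sh only varies the force flag by filesystem type (-F for ext4, -f for btrfs and xfs) before invoking the corresponding mkfs tool. A simplified sketch of that helper, with the retry bookkeeping (local i=0) reduced to a single attempt for illustration:

    make_filesystem() {
        local fstype=$1 dev_name=$2
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F    # mke2fs needs -F to overwrite an existing signature
        else
            force=-f    # mkfs.btrfs and mkfs.xfs use -f
        fi
        # simplified: the real helper retries a few times before giving up
        mkfs.$fstype $force "$dev_name"
    }

    make_filesystem btrfs /dev/nvme0n1p1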
00:12:02.807 NOTE: several default settings have changed in version 5.15, please make sure 00:12:02.807 this does not affect your deployments: 00:12:02.807 - DUP for metadata (-m dup) 00:12:02.807 - enabled no-holes (-O no-holes) 00:12:02.807 - enabled free-space-tree (-R free-space-tree) 00:12:02.807 00:12:02.807 Label: (null) 00:12:02.807 UUID: 0be8f26a-10e6-402b-b875-8e19f0dfc38a 00:12:02.807 Node size: 16384 00:12:02.807 Sector size: 4096 (CPU page size: 4096) 00:12:02.807 Filesystem size: 510.00MiB 00:12:02.807 Block group profiles: 00:12:02.807 Data: single 8.00MiB 00:12:02.807 Metadata: DUP 32.00MiB 00:12:02.807 System: DUP 8.00MiB 00:12:02.807 SSD detected: yes 00:12:02.807 Zoned device: no 00:12:02.807 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:02.807 Checksum: crc32c 00:12:02.807 Number of devices: 1 00:12:02.807 Devices: 00:12:02.807 ID SIZE PATH 00:12:02.807 1 510.00MiB /dev/nvme0n1p1 00:12:02.807 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:02.807 16:37:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 453157 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.807 00:12:02.807 real 0m0.449s 00:12:02.807 user 0m0.030s 00:12:02.807 sys 0m0.110s 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:02.807 ************************************ 00:12:02.807 END TEST filesystem_in_capsule_btrfs 00:12:02.807 ************************************ 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.807 ************************************ 00:12:02.807 START TEST filesystem_in_capsule_xfs 00:12:02.807 ************************************ 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:02.807 16:37:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:02.807 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:02.807 = sectsz=512 attr=2, projid32bit=1 00:12:02.807 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:02.807 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:02.807 data = bsize=4096 blocks=130560, imaxpct=25 00:12:02.807 = sunit=0 swidth=0 blks 00:12:02.807 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:02.807 log =internal log bsize=4096 blocks=16384, version=2 00:12:02.807 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:02.807 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:03.740 Discarding blocks...Done. 
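Each subtest then exercises the freshly formatted partition with the same small I/O cycle over NVMe/TCP and verifies that the target (pid 453157 in this run) is still alive and the device is still visible. Condensed from the entries that follow:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa       # create, flush, and delete a file on the remote namespace
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 "$nvmfpid"                        # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present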
00:12:03.740 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:03.740 16:37:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 453157 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.267 00:12:06.267 real 0m3.417s 00:12:06.267 user 0m0.021s 00:12:06.267 sys 0m0.077s 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.267 ************************************ 00:12:06.267 END TEST filesystem_in_capsule_xfs 00:12:06.267 ************************************ 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:06.267 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 453157 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 453157 ']' 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 453157 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:06.525 16:37:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 453157 00:12:06.525 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:06.525 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:06.525 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 453157' 00:12:06.525 killing process with pid 453157 00:12:06.525 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 453157 00:12:06.525 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 453157 00:12:06.783 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:06.783 00:12:06.783 real 0m16.488s 00:12:06.783 user 1m4.836s 00:12:06.783 sys 0m1.408s 00:12:06.783 16:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.783 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.783 ************************************ 00:12:06.783 END TEST nvmf_filesystem_in_capsule 00:12:06.783 ************************************ 00:12:06.783 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:06.783 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:06.783 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:06.783 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:06.783 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:06.783 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.783 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.783 rmmod nvme_tcp 00:12:06.783 rmmod nvme_fabrics 00:12:06.783 rmmod nvme_keyring 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.041 16:37:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.945 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.945 00:12:08.945 real 0m40.549s 00:12:08.945 user 2m7.019s 00:12:08.945 sys 0m7.477s 00:12:08.945 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.945 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:08.945 
************************************ 00:12:08.945 END TEST nvmf_filesystem 00:12:08.945 ************************************ 00:12:08.945 16:37:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:08.945 16:37:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:08.945 16:37:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.945 16:37:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.204 ************************************ 00:12:09.204 START TEST nvmf_target_discovery 00:12:09.204 ************************************ 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:09.204 * Looking for test storage... 00:12:09.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.204 --rc genhtml_branch_coverage=1 00:12:09.204 --rc genhtml_function_coverage=1 00:12:09.204 --rc genhtml_legend=1 00:12:09.204 --rc geninfo_all_blocks=1 00:12:09.204 --rc geninfo_unexecuted_blocks=1 00:12:09.204 00:12:09.204 ' 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.204 --rc genhtml_branch_coverage=1 00:12:09.204 --rc genhtml_function_coverage=1 00:12:09.204 --rc genhtml_legend=1 00:12:09.204 --rc geninfo_all_blocks=1 00:12:09.204 --rc geninfo_unexecuted_blocks=1 00:12:09.204 00:12:09.204 ' 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.204 --rc genhtml_branch_coverage=1 00:12:09.204 --rc genhtml_function_coverage=1 00:12:09.204 --rc genhtml_legend=1 00:12:09.204 --rc geninfo_all_blocks=1 00:12:09.204 --rc geninfo_unexecuted_blocks=1 00:12:09.204 00:12:09.204 ' 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:09.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.204 --rc genhtml_branch_coverage=1 00:12:09.204 --rc genhtml_function_coverage=1 00:12:09.204 --rc genhtml_legend=1 00:12:09.204 --rc geninfo_all_blocks=1 00:12:09.204 --rc geninfo_unexecuted_blocks=1 00:12:09.204 00:12:09.204 ' 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.204 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:09.205 16:37:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.783 16:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:15.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:15.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:15.783 Found net devices under 0000:86:00.0: cvl_0_0 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:15.783 Found net devices under 0000:86:00.1: cvl_0_1 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.783 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.784 16:37:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:12:15.784 00:12:15.784 --- 10.0.0.2 ping statistics --- 00:12:15.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.784 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:12:15.784 00:12:15.784 --- 10.0.0.1 ping statistics --- 00:12:15.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.784 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=459560 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 459560 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 459560 ']' 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.784 16:37:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 [2024-10-14 16:37:19.822242] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:12:15.784 [2024-10-14 16:37:19.822287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.784 [2024-10-14 16:37:19.894936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.784 [2024-10-14 16:37:19.939531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.784 [2024-10-14 16:37:19.939566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.784 [2024-10-14 16:37:19.939573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.784 [2024-10-14 16:37:19.939579] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.784 [2024-10-14 16:37:19.939584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:15.784 [2024-10-14 16:37:19.941158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.784 [2024-10-14 16:37:19.941192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.784 [2024-10-14 16:37:19.941300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.784 [2024-10-14 16:37:19.941300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 [2024-10-14 16:37:20.078973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 Null1 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 16:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 [2024-10-14 16:37:20.124328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 Null2 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:15.784 Null3 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.784 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.785 Null4 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.785 16:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.785 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:16.044 00:12:16.044 Discovery Log Number of Records 6, Generation counter 6 00:12:16.044 =====Discovery Log Entry 0====== 00:12:16.044 trtype: tcp 00:12:16.044 adrfam: ipv4 00:12:16.044 subtype: current discovery subsystem 00:12:16.044 treq: not required 00:12:16.044 portid: 0 00:12:16.044 trsvcid: 4420 00:12:16.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:16.044 traddr: 10.0.0.2 00:12:16.044 eflags: explicit discovery connections, duplicate discovery information 00:12:16.044 sectype: none 00:12:16.044 =====Discovery Log Entry 1====== 00:12:16.044 trtype: tcp 00:12:16.044 adrfam: ipv4 00:12:16.044 subtype: nvme subsystem 00:12:16.044 treq: not required 00:12:16.044 portid: 0 00:12:16.044 trsvcid: 4420 00:12:16.044 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:16.044 traddr: 10.0.0.2 00:12:16.044 eflags: none 00:12:16.044 sectype: none 00:12:16.044 =====Discovery Log Entry 2====== 00:12:16.044 trtype: tcp 00:12:16.044 adrfam: ipv4 00:12:16.044 subtype: nvme subsystem 00:12:16.044 treq: not required 00:12:16.044 portid: 0 00:12:16.044 trsvcid: 4420 00:12:16.044 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:16.044 traddr: 10.0.0.2 00:12:16.044 eflags: none 00:12:16.044 sectype: none 00:12:16.044 =====Discovery Log Entry 3====== 00:12:16.044 trtype: tcp 00:12:16.044 adrfam: ipv4 00:12:16.044 subtype: nvme subsystem 00:12:16.044 treq: not required 00:12:16.044 portid: 0 00:12:16.044 trsvcid: 4420 00:12:16.044 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:16.044 traddr: 10.0.0.2 00:12:16.044 eflags: none 00:12:16.044 sectype: none 00:12:16.044 =====Discovery Log Entry 4====== 00:12:16.044 trtype: tcp 00:12:16.044 adrfam: ipv4 00:12:16.044 subtype: nvme subsystem 
00:12:16.044 treq: not required 00:12:16.044 portid: 0 00:12:16.044 trsvcid: 4420 00:12:16.044 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:16.044 traddr: 10.0.0.2 00:12:16.044 eflags: none 00:12:16.044 sectype: none 00:12:16.044 =====Discovery Log Entry 5====== 00:12:16.044 trtype: tcp 00:12:16.044 adrfam: ipv4 00:12:16.044 subtype: discovery subsystem referral 00:12:16.044 treq: not required 00:12:16.044 portid: 0 00:12:16.044 trsvcid: 4430 00:12:16.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:16.044 traddr: 10.0.0.2 00:12:16.044 eflags: none 00:12:16.044 sectype: none 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:16.044 Perform nvmf subsystem discovery via RPC 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 [ 00:12:16.044 { 00:12:16.044 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:16.044 "subtype": "Discovery", 00:12:16.044 "listen_addresses": [ 00:12:16.044 { 00:12:16.044 "trtype": "TCP", 00:12:16.044 "adrfam": "IPv4", 00:12:16.044 "traddr": "10.0.0.2", 00:12:16.044 "trsvcid": "4420" 00:12:16.044 } 00:12:16.044 ], 00:12:16.044 "allow_any_host": true, 00:12:16.044 "hosts": [] 00:12:16.044 }, 00:12:16.044 { 00:12:16.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.044 "subtype": "NVMe", 00:12:16.044 "listen_addresses": [ 00:12:16.044 { 00:12:16.044 "trtype": "TCP", 00:12:16.044 "adrfam": "IPv4", 00:12:16.044 "traddr": "10.0.0.2", 00:12:16.044 "trsvcid": "4420" 00:12:16.044 } 00:12:16.044 ], 00:12:16.044 "allow_any_host": true, 00:12:16.044 "hosts": [], 00:12:16.044 "serial_number": "SPDK00000000000001", 00:12:16.044 "model_number": "SPDK bdev Controller", 00:12:16.044 "max_namespaces": 32, 00:12:16.044 "min_cntlid": 1, 00:12:16.044 "max_cntlid": 65519, 00:12:16.044 "namespaces": [ 00:12:16.044 { 00:12:16.044 "nsid": 1, 00:12:16.044 "bdev_name": "Null1", 00:12:16.044 "name": "Null1", 00:12:16.044 "nguid": "EB8F4BEDF9AF4EDCB7FEB3BDAEE21426", 00:12:16.044 "uuid": "eb8f4bed-f9af-4edc-b7fe-b3bdaee21426" 00:12:16.044 } 00:12:16.044 ] 00:12:16.044 }, 00:12:16.044 { 00:12:16.044 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:16.044 "subtype": "NVMe", 00:12:16.044 "listen_addresses": [ 00:12:16.044 { 00:12:16.044 "trtype": "TCP", 00:12:16.044 "adrfam": "IPv4", 00:12:16.044 "traddr": "10.0.0.2", 00:12:16.044 "trsvcid": "4420" 00:12:16.044 } 00:12:16.044 ], 00:12:16.044 "allow_any_host": true, 00:12:16.044 "hosts": [], 00:12:16.044 "serial_number": "SPDK00000000000002", 00:12:16.044 "model_number": "SPDK bdev Controller", 00:12:16.044 "max_namespaces": 32, 00:12:16.044 "min_cntlid": 1, 00:12:16.044 "max_cntlid": 65519, 00:12:16.044 "namespaces": [ 00:12:16.044 { 00:12:16.044 "nsid": 1, 00:12:16.044 "bdev_name": "Null2", 00:12:16.044 "name": "Null2", 00:12:16.044 "nguid": "940A3B2E4AD7451393DAEABC55B15528", 00:12:16.044 "uuid": "940a3b2e-4ad7-4513-93da-eabc55b15528" 00:12:16.044 } 00:12:16.044 ] 00:12:16.044 }, 00:12:16.044 { 00:12:16.044 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:16.044 "subtype": "NVMe", 00:12:16.044 "listen_addresses": [ 00:12:16.044 { 00:12:16.044 "trtype": "TCP", 00:12:16.044 "adrfam": "IPv4", 00:12:16.044 "traddr": "10.0.0.2", 
00:12:16.044 "trsvcid": "4420" 00:12:16.044 } 00:12:16.044 ], 00:12:16.044 "allow_any_host": true, 00:12:16.044 "hosts": [], 00:12:16.044 "serial_number": "SPDK00000000000003", 00:12:16.044 "model_number": "SPDK bdev Controller", 00:12:16.044 "max_namespaces": 32, 00:12:16.044 "min_cntlid": 1, 00:12:16.044 "max_cntlid": 65519, 00:12:16.044 "namespaces": [ 00:12:16.044 { 00:12:16.044 "nsid": 1, 00:12:16.044 "bdev_name": "Null3", 00:12:16.044 "name": "Null3", 00:12:16.044 "nguid": "799F85C290E14603BD3CE275E8689F32", 00:12:16.044 "uuid": "799f85c2-90e1-4603-bd3c-e275e8689f32" 00:12:16.044 } 00:12:16.044 ] 00:12:16.044 }, 00:12:16.044 { 00:12:16.044 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:16.044 "subtype": "NVMe", 00:12:16.044 "listen_addresses": [ 00:12:16.044 { 00:12:16.044 "trtype": "TCP", 00:12:16.044 "adrfam": "IPv4", 00:12:16.044 "traddr": "10.0.0.2", 00:12:16.044 "trsvcid": "4420" 00:12:16.044 } 00:12:16.044 ], 00:12:16.044 "allow_any_host": true, 00:12:16.044 "hosts": [], 00:12:16.044 "serial_number": "SPDK00000000000004", 00:12:16.044 "model_number": "SPDK bdev Controller", 00:12:16.044 "max_namespaces": 32, 00:12:16.044 "min_cntlid": 1, 00:12:16.044 "max_cntlid": 65519, 00:12:16.044 "namespaces": [ 00:12:16.044 { 00:12:16.044 "nsid": 1, 00:12:16.044 "bdev_name": "Null4", 00:12:16.044 "name": "Null4", 00:12:16.044 "nguid": "165D0FF1DB6A41B58DBF3417BAAFCE35", 00:12:16.044 "uuid": "165d0ff1-db6a-41b5-8dbf-3417baafce35" 00:12:16.044 } 00:12:16.044 ] 00:12:16.044 } 00:12:16.044 ] 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.044 16:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.044 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:16.045 16:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.045 rmmod nvme_tcp 00:12:16.045 rmmod nvme_fabrics 00:12:16.045 rmmod nvme_keyring 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 459560 ']' 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 459560 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 459560 ']' 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 459560 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:16.045 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 459560 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 459560' 00:12:16.304 killing process with pid 459560 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 459560 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 459560 00:12:16.304 16:37:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.304 16:37:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.838 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:18.838 00:12:18.838 real 0m9.367s 00:12:18.838 user 0m5.680s 00:12:18.838 sys 0m4.790s 00:12:18.838 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.838 16:37:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.838 ************************************ 00:12:18.838 END TEST nvmf_target_discovery 00:12:18.838 ************************************ 00:12:18.838 16:37:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:18.838 16:37:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:18.838 16:37:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.838 16:37:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.838 ************************************ 00:12:18.838 START TEST nvmf_referrals 00:12:18.838 ************************************ 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:18.838 * Looking for test storage... 
00:12:18.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:18.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.838 --rc genhtml_branch_coverage=1 00:12:18.838 --rc genhtml_function_coverage=1 00:12:18.838 --rc genhtml_legend=1 00:12:18.838 --rc geninfo_all_blocks=1 00:12:18.838 --rc geninfo_unexecuted_blocks=1 00:12:18.838 00:12:18.838 ' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:18.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.838 --rc genhtml_branch_coverage=1 00:12:18.838 --rc genhtml_function_coverage=1 00:12:18.838 --rc genhtml_legend=1 00:12:18.838 --rc geninfo_all_blocks=1 00:12:18.838 --rc geninfo_unexecuted_blocks=1 00:12:18.838 00:12:18.838 ' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:18.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.838 --rc genhtml_branch_coverage=1 00:12:18.838 --rc genhtml_function_coverage=1 00:12:18.838 --rc genhtml_legend=1 00:12:18.838 --rc geninfo_all_blocks=1 00:12:18.838 --rc geninfo_unexecuted_blocks=1 00:12:18.838 00:12:18.838 ' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:18.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.838 --rc genhtml_branch_coverage=1 00:12:18.838 --rc genhtml_function_coverage=1 00:12:18.838 --rc genhtml_legend=1 00:12:18.838 --rc geninfo_all_blocks=1 00:12:18.838 --rc geninfo_unexecuted_blocks=1 00:12:18.838 00:12:18.838 ' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.838 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:18.839 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:18.839 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:18.839 16:37:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:25.403 16:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:25.403 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:25.403 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:25.403 
16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:25.403 Found net devices under 0000:86:00.0: cvl_0_0 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:25.403 Found net devices under 0000:86:00.1: cvl_0_1 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:25.403 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:25.404 16:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.404 16:37:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:25.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:12:25.404 00:12:25.404 --- 10.0.0.2 ping statistics --- 00:12:25.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.404 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:12:25.404 00:12:25.404 --- 10.0.0.1 ping statistics --- 00:12:25.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.404 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=463233 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 463233 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 463233 ']' 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
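
Editor's note: the records above (nvmf/common.sh, nvmf_tcp_init) show the harness rebuilding its point-to-point TCP topology before the referrals test starts: the first detected e810 port, cvl_0_0, is moved into a private network namespace to act as the target at 10.0.0.2, the second port, cvl_0_1, stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is confirmed with one ping in each direction. Condensed from the trace into plain commands (the iptables comment tag is dropped and a NS variable is introduced only for brevity), the sequence is roughly:

    # Sketch of the topology setup reported above; the authoritative logic is
    # nvmf_tcp_init in test/nvmf/common.sh, and cvl_0_0/cvl_0_1 are the names
    # this rig assigned to the two 0000:86:00.* ports.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator
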
00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.404 [2024-10-14 16:37:29.277041] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:12:25.404 [2024-10-14 16:37:29.277085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.404 [2024-10-14 16:37:29.347983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.404 [2024-10-14 16:37:29.388062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.404 [2024-10-14 16:37:29.388101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.404 [2024-10-14 16:37:29.388108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.404 [2024-10-14 16:37:29.388114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.404 [2024-10-14 16:37:29.388119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.404 [2024-10-14 16:37:29.389703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.404 [2024-10-14 16:37:29.389810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.404 [2024-10-14 16:37:29.389921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.404 [2024-10-14 16:37:29.389921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.404 [2024-10-14 16:37:29.534757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
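
Editor's note: with the namespace in place, referrals.sh launches nvmf_tgt inside it, waits for the RPC socket at /var/tmp/spdk.sock, and issues the first two RPCs traced above. rpc_cmd is the autotest wrapper around SPDK's standard RPC client, so the same bring-up can be sketched with scripts/rpc.py directly; the polling loop below is a stand-in for the test's waitforlisten helper, the repo root is assumed as working directory, and the transport options are reproduced verbatim from the trace rather than interpreted:

    # Rough equivalent of the target bring-up above, using scripts/rpc.py
    # instead of the rpc_cmd test wrapper (default /var/tmp/spdk.sock socket).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll until the RPC socket answers before sending further commands
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
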
00:12:25.404 [2024-10-14 16:37:29.548091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.404 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:25.405 16:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.405 16:37:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.663 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.921 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.178 16:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:26.178 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.179 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.179 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.179 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.179 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.436 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:26.436 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:26.436 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:26.436 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:26.436 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:26.436 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.436 16:37:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.693 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.950 rmmod nvme_tcp 00:12:26.950 rmmod nvme_fabrics 00:12:26.950 rmmod nvme_keyring 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 463233 ']' 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 463233 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 463233 ']' 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 463233 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.950 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 463233 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 463233' 00:12:27.208 killing process with pid 463233 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 463233 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 463233 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.208 16:37:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.247 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.247 00:12:29.247 real 0m10.830s 00:12:29.247 user 0m12.083s 00:12:29.247 sys 0m5.238s 00:12:29.247 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.247 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.247 ************************************ 00:12:29.247 END TEST nvmf_referrals 00:12:29.247 ************************************ 00:12:29.506 16:37:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:29.506 16:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.506 16:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.506 16:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.506 ************************************ 00:12:29.506 START TEST nvmf_connect_disconnect 00:12:29.506 ************************************ 00:12:29.506 16:37:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:29.506 * Looking for test storage... 00:12:29.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:29.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.507 --rc genhtml_branch_coverage=1 00:12:29.507 --rc genhtml_function_coverage=1 00:12:29.507 --rc genhtml_legend=1 00:12:29.507 --rc geninfo_all_blocks=1 00:12:29.507 --rc geninfo_unexecuted_blocks=1 00:12:29.507 00:12:29.507 ' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:29.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.507 --rc genhtml_branch_coverage=1 00:12:29.507 --rc genhtml_function_coverage=1 00:12:29.507 --rc genhtml_legend=1 00:12:29.507 --rc geninfo_all_blocks=1 00:12:29.507 --rc geninfo_unexecuted_blocks=1 00:12:29.507 00:12:29.507 ' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:29.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.507 --rc genhtml_branch_coverage=1 00:12:29.507 --rc genhtml_function_coverage=1 00:12:29.507 --rc genhtml_legend=1 00:12:29.507 --rc geninfo_all_blocks=1 00:12:29.507 --rc geninfo_unexecuted_blocks=1 00:12:29.507 00:12:29.507 ' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:29.507 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.507 --rc genhtml_branch_coverage=1 00:12:29.507 --rc genhtml_function_coverage=1 00:12:29.507 --rc genhtml_legend=1 00:12:29.507 --rc geninfo_all_blocks=1 00:12:29.507 --rc geninfo_unexecuted_blocks=1 00:12:29.507 00:12:29.507 ' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.507 16:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:29.507 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:29.508 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.508 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:29.508 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:29.508 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:29.508 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.508 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.508 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.766 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:29.766 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:29.766 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.766 16:37:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.334 
16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.334 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:36.334 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.335 
16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:36.335 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:36.335 Found net devices under 0000:86:00.0: cvl_0_0 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:36.335 Found net devices under 0000:86:00.1: cvl_0_1 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:36.335 16:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:12:36.335 00:12:36.335 --- 10.0.0.2 ping statistics --- 00:12:36.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.335 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:12:36.335 00:12:36.335 --- 10.0.0.1 ping statistics --- 00:12:36.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.335 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=467308 00:12:36.335 16:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 467308 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 467308 ']' 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.335 [2024-10-14 16:37:40.180789] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:12:36.335 [2024-10-14 16:37:40.180836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.335 [2024-10-14 16:37:40.253657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.335 [2024-10-14 16:37:40.296292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.335 [2024-10-14 16:37:40.296328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.335 [2024-10-14 16:37:40.296335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.335 [2024-10-14 16:37:40.296341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.335 [2024-10-14 16:37:40.296349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:36.335 [2024-10-14 16:37:40.297951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.335 [2024-10-14 16:37:40.298057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.335 [2024-10-14 16:37:40.298161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.335 [2024-10-14 16:37:40.298163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:36.335 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.336 [2024-10-14 16:37:40.435541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.336 16:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:36.336 [2024-10-14 16:37:40.513091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:36.336 16:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:39.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:52.733 rmmod nvme_tcp 00:12:52.733 rmmod nvme_fabrics 00:12:52.733 rmmod nvme_keyring 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 467308 ']' 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 467308 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 467308 ']' 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 467308 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 467308 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 467308' 00:12:52.733 killing process with pid 467308 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 467308 00:12:52.733 16:37:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 467308 00:12:52.733 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:52.733 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:52.733 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:52.733 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:52.733 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:52.734 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:52.734 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:52.734 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:52.734 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:52.734 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.734 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.734 16:37:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.644 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:54.644 00:12:54.644 real 0m25.256s 00:12:54.644 user 1m8.403s 00:12:54.644 sys 0m5.886s 00:12:54.644 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.644 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:54.644 ************************************ 00:12:54.644 END TEST nvmf_connect_disconnect 00:12:54.644 ************************************ 00:12:54.644 16:37:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:54.644 16:37:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:54.644 16:37:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.644 16:37:59 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:12:54.644 ************************************ 00:12:54.644 START TEST nvmf_multitarget 00:12:54.644 ************************************ 00:12:54.644 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:54.914 * Looking for test storage... 00:12:54.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:54.914 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:54.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.915 --rc genhtml_branch_coverage=1 00:12:54.915 --rc genhtml_function_coverage=1 00:12:54.915 --rc genhtml_legend=1 00:12:54.915 --rc geninfo_all_blocks=1 00:12:54.915 --rc geninfo_unexecuted_blocks=1 00:12:54.915 00:12:54.915 ' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:54.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.915 --rc genhtml_branch_coverage=1 00:12:54.915 --rc genhtml_function_coverage=1 00:12:54.915 --rc genhtml_legend=1 00:12:54.915 --rc geninfo_all_blocks=1 00:12:54.915 --rc geninfo_unexecuted_blocks=1 00:12:54.915 00:12:54.915 ' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:54.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.915 --rc genhtml_branch_coverage=1 00:12:54.915 --rc genhtml_function_coverage=1 00:12:54.915 --rc genhtml_legend=1 00:12:54.915 --rc geninfo_all_blocks=1 00:12:54.915 --rc geninfo_unexecuted_blocks=1 00:12:54.915 00:12:54.915 ' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:54.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.915 --rc genhtml_branch_coverage=1 00:12:54.915 --rc genhtml_function_coverage=1 00:12:54.915 --rc genhtml_legend=1 00:12:54.915 --rc geninfo_all_blocks=1 00:12:54.915 --rc geninfo_unexecuted_blocks=1 00:12:54.915 00:12:54.915 ' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.915 16:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:54.915 16:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.915 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:54.916 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:54.916 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:54.916 16:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:01.480 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:01.480 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:01.480 Found net devices under 0000:86:00.0: cvl_0_0 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:01.480 Found net devices under 0000:86:00.1: cvl_0_1 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.480 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:01.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:13:01.481 00:13:01.481 --- 10.0.0.2 ping statistics --- 00:13:01.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.481 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:13:01.481 00:13:01.481 --- 10.0.0.1 ping statistics --- 00:13:01.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.481 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=473693 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 473693 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 473693 ']' 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:01.481 [2024-10-14 16:38:05.529324] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:13:01.481 [2024-10-14 16:38:05.529374] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.481 [2024-10-14 16:38:05.603303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.481 [2024-10-14 16:38:05.645886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.481 [2024-10-14 16:38:05.645920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.481 [2024-10-14 16:38:05.645927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.481 [2024-10-14 16:38:05.645933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.481 [2024-10-14 16:38:05.645938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.481 [2024-10-14 16:38:05.647565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.481 [2024-10-14 16:38:05.647712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.481 [2024-10-14 16:38:05.647677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.481 [2024-10-14 16:38:05.647713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:01.481 "nvmf_tgt_1" 00:13:01.481 16:38:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:01.481 "nvmf_tgt_2" 00:13:01.481 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:13:01.481 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:01.738 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:01.738 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:01.738 true 00:13:01.738 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:01.996 true 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.996 rmmod nvme_tcp 00:13:01.996 rmmod nvme_fabrics 00:13:01.996 rmmod nvme_keyring 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 473693 ']' 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 473693 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 473693 ']' 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 473693 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.996 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 473693 00:13:02.255 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:02.255 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:02.255 16:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 473693' 00:13:02.255 killing process with pid 473693 00:13:02.255 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 473693 00:13:02.255 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 473693 00:13:02.255 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:02.255 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:02.255 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:02.255 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:02.255 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:13:02.256 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:02.256 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:13:02.256 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:02.256 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:02.256 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.256 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.256 16:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.790 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:04.790 00:13:04.790 real 0m9.618s 00:13:04.790 user 0m7.063s 00:13:04.790 sys 0m4.962s 00:13:04.790 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.790 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:04.790 ************************************ 00:13:04.790 END TEST nvmf_multitarget 00:13:04.790 ************************************ 00:13:04.790 16:38:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:04.790 16:38:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:04.790 16:38:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.790 16:38:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:04.790 ************************************ 00:13:04.790 START TEST nvmf_rpc 00:13:04.790 ************************************ 00:13:04.790 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:04.790 * Looking for test storage... 
00:13:04.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:04.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.790 --rc genhtml_branch_coverage=1 00:13:04.790 --rc genhtml_function_coverage=1 00:13:04.790 --rc genhtml_legend=1 00:13:04.790 --rc geninfo_all_blocks=1 00:13:04.790 --rc geninfo_unexecuted_blocks=1 00:13:04.790 00:13:04.790 ' 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:04.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.790 --rc genhtml_branch_coverage=1 00:13:04.790 --rc genhtml_function_coverage=1 00:13:04.790 --rc genhtml_legend=1 00:13:04.790 --rc geninfo_all_blocks=1 00:13:04.790 --rc geninfo_unexecuted_blocks=1 00:13:04.790 00:13:04.790 ' 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:04.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.790 --rc genhtml_branch_coverage=1 00:13:04.790 --rc genhtml_function_coverage=1 00:13:04.790 --rc genhtml_legend=1 00:13:04.790 --rc geninfo_all_blocks=1 00:13:04.790 --rc geninfo_unexecuted_blocks=1 00:13:04.790 00:13:04.790 ' 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:04.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.790 --rc genhtml_branch_coverage=1 00:13:04.790 --rc genhtml_function_coverage=1 00:13:04.790 --rc genhtml_legend=1 00:13:04.790 --rc geninfo_all_blocks=1 00:13:04.790 --rc geninfo_unexecuted_blocks=1 00:13:04.790 00:13:04.790 ' 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.790 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:04.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:04.791 16:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:04.791 16:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:11.362 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:11.362 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:11.362 Found net devices under 0000:86:00.0: cvl_0_0 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.362 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:11.363 Found net devices under 0000:86:00.1: cvl_0_1 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.363 16:38:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.363 16:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:11.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:13:11.363 00:13:11.363 --- 10.0.0.2 ping statistics --- 00:13:11.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.363 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:11.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:11.363 00:13:11.363 --- 10.0.0.1 ping statistics --- 00:13:11.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.363 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=477347 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 477347 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 477347 ']' 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.363 [2024-10-14 16:38:15.213871] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
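nvmf_tcp_init, traced above, builds the two-endpoint topology by moving the target port into its own network namespace so that target and initiator traffic crosses the physical link instead of loopback, then verifies it with a ping in each direction before launching nvmf_tgt inside that namespace. A condensed sketch of those commands (interface, namespace names and addresses are the ones from this run; the nvmf_tgt path is abbreviated from the workspace path in the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator port, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target port, isolated netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &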
00:13:11.363 [2024-10-14 16:38:15.213915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.363 [2024-10-14 16:38:15.286278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.363 [2024-10-14 16:38:15.328714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.363 [2024-10-14 16:38:15.328752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.363 [2024-10-14 16:38:15.328759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.363 [2024-10-14 16:38:15.328766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.363 [2024-10-14 16:38:15.328770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.363 [2024-10-14 16:38:15.330358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.363 [2024-10-14 16:38:15.330464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.363 [2024-10-14 16:38:15.330559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.363 [2024-10-14 16:38:15.330560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:11.363 "tick_rate": 2100000000, 00:13:11.363 "poll_groups": [ 00:13:11.363 { 00:13:11.363 "name": "nvmf_tgt_poll_group_000", 00:13:11.363 "admin_qpairs": 0, 00:13:11.363 "io_qpairs": 0, 00:13:11.363 "current_admin_qpairs": 0, 00:13:11.363 "current_io_qpairs": 0, 00:13:11.363 "pending_bdev_io": 0, 00:13:11.363 "completed_nvme_io": 0, 00:13:11.363 "transports": [] 00:13:11.363 }, 00:13:11.363 { 00:13:11.363 "name": "nvmf_tgt_poll_group_001", 00:13:11.363 "admin_qpairs": 0, 00:13:11.363 "io_qpairs": 0, 00:13:11.363 "current_admin_qpairs": 0, 00:13:11.363 "current_io_qpairs": 0, 00:13:11.363 "pending_bdev_io": 0, 00:13:11.363 "completed_nvme_io": 0, 00:13:11.363 "transports": [] 00:13:11.363 }, 00:13:11.363 { 00:13:11.363 "name": "nvmf_tgt_poll_group_002", 00:13:11.363 "admin_qpairs": 0, 00:13:11.363 "io_qpairs": 0, 00:13:11.363 
"current_admin_qpairs": 0, 00:13:11.363 "current_io_qpairs": 0, 00:13:11.363 "pending_bdev_io": 0, 00:13:11.363 "completed_nvme_io": 0, 00:13:11.363 "transports": [] 00:13:11.363 }, 00:13:11.363 { 00:13:11.363 "name": "nvmf_tgt_poll_group_003", 00:13:11.363 "admin_qpairs": 0, 00:13:11.363 "io_qpairs": 0, 00:13:11.363 "current_admin_qpairs": 0, 00:13:11.363 "current_io_qpairs": 0, 00:13:11.363 "pending_bdev_io": 0, 00:13:11.363 "completed_nvme_io": 0, 00:13:11.363 "transports": [] 00:13:11.363 } 00:13:11.363 ] 00:13:11.363 }' 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:11.363 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.364 [2024-10-14 16:38:15.576356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:11.364 "tick_rate": 2100000000, 00:13:11.364 "poll_groups": [ 00:13:11.364 { 00:13:11.364 "name": "nvmf_tgt_poll_group_000", 00:13:11.364 "admin_qpairs": 0, 00:13:11.364 "io_qpairs": 0, 00:13:11.364 "current_admin_qpairs": 0, 00:13:11.364 "current_io_qpairs": 0, 00:13:11.364 "pending_bdev_io": 0, 00:13:11.364 "completed_nvme_io": 0, 00:13:11.364 "transports": [ 00:13:11.364 { 00:13:11.364 "trtype": "TCP" 00:13:11.364 } 00:13:11.364 ] 00:13:11.364 }, 00:13:11.364 { 00:13:11.364 "name": "nvmf_tgt_poll_group_001", 00:13:11.364 "admin_qpairs": 0, 00:13:11.364 "io_qpairs": 0, 00:13:11.364 "current_admin_qpairs": 0, 00:13:11.364 "current_io_qpairs": 0, 00:13:11.364 "pending_bdev_io": 0, 00:13:11.364 "completed_nvme_io": 0, 00:13:11.364 "transports": [ 00:13:11.364 { 00:13:11.364 "trtype": "TCP" 00:13:11.364 } 00:13:11.364 ] 00:13:11.364 }, 00:13:11.364 { 00:13:11.364 "name": "nvmf_tgt_poll_group_002", 00:13:11.364 "admin_qpairs": 0, 00:13:11.364 "io_qpairs": 0, 00:13:11.364 "current_admin_qpairs": 0, 00:13:11.364 "current_io_qpairs": 0, 00:13:11.364 "pending_bdev_io": 0, 00:13:11.364 "completed_nvme_io": 0, 00:13:11.364 "transports": [ 00:13:11.364 { 00:13:11.364 "trtype": "TCP" 
00:13:11.364 } 00:13:11.364 ] 00:13:11.364 }, 00:13:11.364 { 00:13:11.364 "name": "nvmf_tgt_poll_group_003", 00:13:11.364 "admin_qpairs": 0, 00:13:11.364 "io_qpairs": 0, 00:13:11.364 "current_admin_qpairs": 0, 00:13:11.364 "current_io_qpairs": 0, 00:13:11.364 "pending_bdev_io": 0, 00:13:11.364 "completed_nvme_io": 0, 00:13:11.364 "transports": [ 00:13:11.364 { 00:13:11.364 "trtype": "TCP" 00:13:11.364 } 00:13:11.364 ] 00:13:11.364 } 00:13:11.364 ] 00:13:11.364 }' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.364 Malloc1 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
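The jcount and jsum checks traced a little earlier are small jq helpers from target/rpc.sh: jcount counts how many values a filter yields, jsum adds them up, and the test asserts the expected totals against nvmf_get_stats before and after nvmf_create_transport. A rough shell equivalent of those assertions, assuming rpc.py is pointed at the RPC socket this run uses:

    stats=$(rpc.py nvmf_get_stats)
    # four poll groups expected for the 0xF core mask
    [ "$(echo "$stats" | jq '.poll_groups[].name' | wc -l)" -eq 4 ]
    # before nvmf_create_transport there is no transport entry in any poll group...
    [ "$(echo "$stats" | jq '.poll_groups[0].transports[0]')" = null ]
    # ...and every qpair counter sums to zero
    [ "$(echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}')" -eq 0 ]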
common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.364 [2024-10-14 16:38:15.754059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:11.364 [2024-10-14 16:38:15.782676] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:11.364 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:11.364 could not add new controller: failed to write to nvme-fabrics device 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:11.364 16:38:15 
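The failed connect above is the expected outcome of the access-control setup: the subsystem is created, Malloc1 is attached, allow_any_host is disabled, and a listener is added, so an initiator whose host NQN is not whitelisted is rejected at /dev/nvme-fabrics with an I/O error. The same flow, expressed as direct rpc.py calls (rpc_cmd in the test wraps SPDK's scripts/rpc.py; HOSTNQN stands in for the host NQN used in this run):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1     # require an explicit host list
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # With no host whitelisted, the connect attempt is rejected by the target:
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$HOSTNQN"
    # Whitelisting the host NQN makes the same connect succeed:
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$HOSTNQN"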
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.364 16:38:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.298 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.298 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.298 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.298 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:12.298 16:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:14.826 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:14.826 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:14.826 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.826 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:14.826 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.826 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:14.826 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.826 [2024-10-14 16:38:19.094363] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:14.826 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:14.826 could not add new controller: failed to write to nvme-fabrics device 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.826 
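The second half of the access-control test, traced around here, is the inverse: after the host entry is removed the connect is rejected again, and re-enabling allow_any_host lets the initiator in without any whitelisting. A condensed sketch of that sequence (same placeholder HOSTNQN as above):

    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$HOSTNQN"   # rejected again
    rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1                        # accepted without a host entry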
16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.826 16:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.758 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.758 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:15.758 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.758 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:15.758 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:17.659 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:17.659 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:17.659 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.659 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:17.659 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.659 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:17.659 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:17.918 
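What follows in the trace is the first of two five-iteration loops (target/rpc.sh@81-94): each pass rebuilds the subsystem, attaches Malloc1 as namespace 5, connects the initiator, waits for the block device, then tears everything down. A sketch of the loop body as it appears in the trace (the run also passes --hostnqn/--hostid to nvme connect; omitted here for brevity):

    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        # wait for the namespace to appear (waitforserial), then unwind
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done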
16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.918 [2024-10-14 16:38:22.408696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.918 16:38:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.291 16:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.291 16:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:19.291 16:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.291 16:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:19.291 16:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.189 [2024-10-14 16:38:25.752195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.189 16:38:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.560 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:22.560 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:22.560 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.560 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:22.560 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:24.457 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:24.457 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:24.457 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.457 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:24.457 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.457 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:24.457 16:38:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.457 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.458 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.458 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.715 [2024-10-14 16:38:29.108245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.715 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.088 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.088 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:26.088 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.088 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:26.088 16:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:27.989 
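The waitforserial / waitforserial_disconnect helpers that dominate the trace between connect and disconnect are simple polling loops: they list block devices with lsblk and count entries carrying the subsystem serial until the expected number appears (or disappears), sleeping two seconds between attempts and giving up after 15 tries. A simplified sketch of the wait-for-attach side:

    serial=SPDKISFASTANDAWESOME
    i=0
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices >= 1 )) && break     # the test compares against an exact expected count
        sleep 2
    done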
16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.989 [2024-10-14 16:38:32.459817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.989 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.359 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.359 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:29.359 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.359 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:29.359 16:38:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:31.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.257 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.258 [2024-10-14 16:38:35.770007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.258 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:32.630 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:32.631 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:32.631 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:32.631 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:32.631 16:38:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:34.530 16:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:34.530 16:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:34.530 16:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.530 16:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:34.530 16:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.530 16:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:34.530 16:38:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:34.530 
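Each iteration of the rpc.sh loop above runs one subsystem through the full create, connect, disconnect, tear-down cycle. Condensed into a plain script, that sequence looks roughly like the sketch below; the rpc.py path, host NQN/ID, addresses, and the pre-created Malloc1 bdev are taken from this trace and would need adjusting for another setup.

#!/usr/bin/env bash
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Target side: subsystem with a known serial, a TCP listener, and one namespace.
$rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5    # Malloc1 bdev was created earlier in the run
$rpc nvmf_subsystem_allow_any_host "$nqn"

# Initiator side: attach, confirm the namespace shows up, then detach again.
nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
    --hostid=00ad29c2-ccbd-e911-906e-0017a4403562
sleep 2                                   # crude stand-in for the waitforserial polling above
lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME || echo "namespace not visible yet"
nvme disconnect -n "$nqn"

# Tear-down mirrors the setup.
$rpc nvmf_subsystem_remove_ns "$nqn" 5
$rpc nvmf_delete_subsystem "$nqn"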
16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 [2024-10-14 16:38:39.082833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 [2024-10-14 16:38:39.130949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.530 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.531 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.531 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.531 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.531 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.531 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.531 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.531 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.789 
16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.789 [2024-10-14 16:38:39.179089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.789 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 [2024-10-14 16:38:39.227235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 [2024-10-14 16:38:39.275404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:34.790 "tick_rate": 2100000000, 00:13:34.790 "poll_groups": [ 00:13:34.790 { 00:13:34.790 "name": "nvmf_tgt_poll_group_000", 00:13:34.790 "admin_qpairs": 2, 00:13:34.790 "io_qpairs": 168, 00:13:34.790 "current_admin_qpairs": 0, 00:13:34.790 "current_io_qpairs": 0, 00:13:34.790 "pending_bdev_io": 0, 00:13:34.790 "completed_nvme_io": 218, 00:13:34.790 "transports": [ 00:13:34.790 { 00:13:34.790 "trtype": "TCP" 00:13:34.790 } 00:13:34.790 ] 00:13:34.790 }, 00:13:34.790 { 00:13:34.790 "name": "nvmf_tgt_poll_group_001", 00:13:34.790 "admin_qpairs": 2, 00:13:34.790 "io_qpairs": 168, 00:13:34.790 "current_admin_qpairs": 0, 00:13:34.790 "current_io_qpairs": 0, 00:13:34.790 "pending_bdev_io": 0, 00:13:34.790 "completed_nvme_io": 267, 00:13:34.790 "transports": [ 00:13:34.790 { 00:13:34.790 "trtype": "TCP" 00:13:34.790 } 00:13:34.790 ] 00:13:34.790 }, 00:13:34.790 { 00:13:34.790 "name": "nvmf_tgt_poll_group_002", 00:13:34.790 "admin_qpairs": 1, 00:13:34.790 "io_qpairs": 168, 00:13:34.790 "current_admin_qpairs": 0, 00:13:34.790 "current_io_qpairs": 0, 00:13:34.790 "pending_bdev_io": 0, 00:13:34.790 "completed_nvme_io": 318, 00:13:34.790 "transports": [ 00:13:34.790 { 00:13:34.790 "trtype": "TCP" 00:13:34.790 } 00:13:34.790 ] 00:13:34.790 }, 00:13:34.790 { 00:13:34.790 "name": "nvmf_tgt_poll_group_003", 00:13:34.790 "admin_qpairs": 2, 00:13:34.790 "io_qpairs": 168, 00:13:34.790 "current_admin_qpairs": 0, 00:13:34.790 "current_io_qpairs": 0, 00:13:34.790 "pending_bdev_io": 0, 00:13:34.790 "completed_nvme_io": 219, 00:13:34.790 "transports": [ 00:13:34.790 { 00:13:34.790 "trtype": "TCP" 00:13:34.790 } 00:13:34.790 ] 00:13:34.790 } 00:13:34.790 ] 00:13:34.790 }' 00:13:34.790 16:38:39 
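The jsum helper invoked right after this nvmf_get_stats dump just sums one JSON field across all poll groups with jq and awk; with the stats shown, .poll_groups[].admin_qpairs sums to 7 and .poll_groups[].io_qpairs to 672. A self-contained sketch of the same aggregation, assuming rpc.py is on PATH and a target is already running:

#!/usr/bin/env bash
# Sum a numeric per-poll-group field from nvmf_get_stats output.
jsum() {
  local filter=$1
  # jq emits one number per poll group; awk adds them up.
  jq "$filter" | awk '{s+=$1} END {print s}'
}

stats=$(rpc.py nvmf_get_stats)
admin_qpairs=$(jsum '.poll_groups[].admin_qpairs' <<< "$stats")
io_qpairs=$(jsum '.poll_groups[].io_qpairs' <<< "$stats")
echo "admin_qpairs=$admin_qpairs io_qpairs=$io_qpairs"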
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:34.790 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:34.791 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:35.050 rmmod nvme_tcp 00:13:35.050 rmmod nvme_fabrics 00:13:35.050 rmmod nvme_keyring 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 477347 ']' 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 477347 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 477347 ']' 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 477347 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 477347 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 477347' 
00:13:35.050 killing process with pid 477347 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 477347 00:13:35.050 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 477347 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.309 16:38:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:37.412 00:13:37.412 real 0m32.853s 00:13:37.412 user 1m38.924s 00:13:37.412 sys 0m6.495s 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.412 ************************************ 00:13:37.412 END TEST nvmf_rpc 00:13:37.412 ************************************ 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.412 ************************************ 00:13:37.412 START TEST nvmf_invalid 00:13:37.412 ************************************ 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:37.412 * Looking for test storage... 
00:13:37.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:37.412 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:37.412 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:37.412 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.412 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.412 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.412 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.413 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:37.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.672 --rc genhtml_branch_coverage=1 00:13:37.672 --rc genhtml_function_coverage=1 00:13:37.672 --rc genhtml_legend=1 00:13:37.672 --rc geninfo_all_blocks=1 00:13:37.672 --rc geninfo_unexecuted_blocks=1 00:13:37.672 00:13:37.672 ' 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:37.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.672 --rc genhtml_branch_coverage=1 00:13:37.672 --rc genhtml_function_coverage=1 00:13:37.672 --rc genhtml_legend=1 00:13:37.672 --rc geninfo_all_blocks=1 00:13:37.672 --rc geninfo_unexecuted_blocks=1 00:13:37.672 00:13:37.672 ' 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:37.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.672 --rc genhtml_branch_coverage=1 00:13:37.672 --rc genhtml_function_coverage=1 00:13:37.672 --rc genhtml_legend=1 00:13:37.672 --rc geninfo_all_blocks=1 00:13:37.672 --rc geninfo_unexecuted_blocks=1 00:13:37.672 00:13:37.672 ' 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:37.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.672 --rc genhtml_branch_coverage=1 00:13:37.672 --rc genhtml_function_coverage=1 00:13:37.672 --rc genhtml_legend=1 00:13:37.672 --rc geninfo_all_blocks=1 00:13:37.672 --rc geninfo_unexecuted_blocks=1 00:13:37.672 00:13:37.672 ' 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:37.672 16:38:42 
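The lt 1.15 2 / cmp_versions trace above is how the harness decides whether the installed lcov is new enough for the branch/function coverage flags: both version strings are split on dots and dashes and compared component by component. A rough standalone equivalent (not the exact scripts/common.sh code):

#!/usr/bin/env bash
# Return success if version $1 is strictly lower than version $2.
version_lt() {
  local -a ver1 ver2
  IFS='.-' read -ra ver1 <<< "$1"
  IFS='.-' read -ra ver2 <<< "$2"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1   # equal versions are not "lower than"
}

version_lt 1.15 2 && echo "lcov older than 2: use the legacy --rc lcov_* options"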
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.672 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:37.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:37.673 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:44.244 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:44.244 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:44.244 Found net devices under 0000:86:00.0: cvl_0_0 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:44.244 Found net devices under 0000:86:00.1: cvl_0_1 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
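The gather_supported_nvmf_pci_devs trace above boils down to: take the known E810/X722/mlx5 PCI device IDs, and for each matching PCI function list the kernel net devices under /sys/bus/pci/devices/<addr>/net/, keeping interfaces that are up. A hedged sketch of that sysfs walk; the device address 0000:86:00.0 is the one from this machine and should be substituted as needed:

#!/usr/bin/env bash
# List the net devices sitting behind one PCI network function.
pci=0000:86:00.0

for netdir in /sys/bus/pci/devices/$pci/net/*; do
  [[ -e $netdir ]] || continue          # no net device bound to this function
  dev=${netdir##*/}
  state=$(cat "$netdir/operstate" 2>/dev/null)
  echo "Found net device under $pci: $dev (operstate=$state)"
done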
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:44.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:13:44.244 00:13:44.244 --- 10.0.0.2 ping statistics --- 00:13:44.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.244 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:13:44.244 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:13:44.244 00:13:44.244 --- 10.0.0.1 ping statistics --- 00:13:44.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.244 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:44.245 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.245 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:44.245 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:44.245 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.245 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:44.245 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:44.245 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.245 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:44.245 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=485087 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 485087 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 485087 ']' 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:44.245 [2024-10-14 16:38:48.092889] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
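nvmf_tcp_init above isolates the two E810 ports from each other by pushing the target port into its own network namespace, so the initiator genuinely talks NVMe/TCP over the wire to 10.0.0.2:4420. The equivalent manual setup, using the interface and namespace names from this run as placeholders:

#!/usr/bin/env bash
set -e
target_if=cvl_0_0      # port that will live inside the namespace (NVMe-oF target)
initiator_if=cvl_0_1   # port that stays in the default namespace (initiator)
ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

ip netns add "$ns"
ip link set "$target_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# Let NVMe/TCP traffic to port 4420 through on the initiator side.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator

# The target itself is then started inside the namespace, e.g.:
# ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF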
00:13:44.245 [2024-10-14 16:38:48.092936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.245 [2024-10-14 16:38:48.165392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.245 [2024-10-14 16:38:48.208199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.245 [2024-10-14 16:38:48.208234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.245 [2024-10-14 16:38:48.208241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.245 [2024-10-14 16:38:48.208248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.245 [2024-10-14 16:38:48.208253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.245 [2024-10-14 16:38:48.209766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.245 [2024-10-14 16:38:48.209877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.245 [2024-10-14 16:38:48.209984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.245 [2024-10-14 16:38:48.209985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9728 00:13:44.245 [2024-10-14 16:38:48.523521] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:44.245 { 00:13:44.245 "nqn": "nqn.2016-06.io.spdk:cnode9728", 00:13:44.245 "tgt_name": "foobar", 00:13:44.245 "method": "nvmf_create_subsystem", 00:13:44.245 "req_id": 1 00:13:44.245 } 00:13:44.245 Got JSON-RPC error response 00:13:44.245 response: 00:13:44.245 { 00:13:44.245 "code": -32603, 00:13:44.245 "message": "Unable to find target foobar" 00:13:44.245 }' 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:44.245 { 00:13:44.245 "nqn": "nqn.2016-06.io.spdk:cnode9728", 00:13:44.245 "tgt_name": "foobar", 00:13:44.245 "method": "nvmf_create_subsystem", 00:13:44.245 "req_id": 1 00:13:44.245 } 00:13:44.245 Got JSON-RPC error response 00:13:44.245 
response: 00:13:44.245 { 00:13:44.245 "code": -32603, 00:13:44.245 "message": "Unable to find target foobar" 00:13:44.245 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20822 00:13:44.245 [2024-10-14 16:38:48.732280] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20822: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:44.245 { 00:13:44.245 "nqn": "nqn.2016-06.io.spdk:cnode20822", 00:13:44.245 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:44.245 "method": "nvmf_create_subsystem", 00:13:44.245 "req_id": 1 00:13:44.245 } 00:13:44.245 Got JSON-RPC error response 00:13:44.245 response: 00:13:44.245 { 00:13:44.245 "code": -32602, 00:13:44.245 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:44.245 }' 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:44.245 { 00:13:44.245 "nqn": "nqn.2016-06.io.spdk:cnode20822", 00:13:44.245 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:44.245 "method": "nvmf_create_subsystem", 00:13:44.245 "req_id": 1 00:13:44.245 } 00:13:44.245 Got JSON-RPC error response 00:13:44.245 response: 00:13:44.245 { 00:13:44.245 "code": -32602, 00:13:44.245 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:44.245 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:44.245 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27817 00:13:44.504 [2024-10-14 16:38:48.940919] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27817: invalid model number 'SPDK_Controller' 00:13:44.504 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:44.504 { 00:13:44.504 "nqn": "nqn.2016-06.io.spdk:cnode27817", 00:13:44.504 "model_number": "SPDK_Controller\u001f", 00:13:44.504 "method": "nvmf_create_subsystem", 00:13:44.504 "req_id": 1 00:13:44.504 } 00:13:44.504 Got JSON-RPC error response 00:13:44.504 response: 00:13:44.504 { 00:13:44.504 "code": -32602, 00:13:44.504 "message": "Invalid MN SPDK_Controller\u001f" 00:13:44.504 }' 00:13:44.504 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:44.504 { 00:13:44.504 "nqn": "nqn.2016-06.io.spdk:cnode27817", 00:13:44.504 "model_number": "SPDK_Controller\u001f", 00:13:44.504 "method": "nvmf_create_subsystem", 00:13:44.504 "req_id": 1 00:13:44.504 } 00:13:44.504 Got JSON-RPC error response 00:13:44.504 response: 00:13:44.504 { 00:13:44.504 "code": -32602, 00:13:44.504 "message": "Invalid MN SPDK_Controller\u001f" 00:13:44.504 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:44.504 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:44.504 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:44.504 16:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:44.504 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:44.504 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:44.505 16:38:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:44.505 
16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 
00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ . == \- ]] 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '.I*9"|u;"0>S;<0'\''t>nQ' 00:13:44.505 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '.I*9"|u;"0>S;<0'\''t>nQ' nqn.2016-06.io.spdk:cnode1894 00:13:44.764 [2024-10-14 16:38:49.290154] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1894: invalid serial number '.I*9"|u;"0>S;<0't>nQ' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:44.764 { 00:13:44.764 "nqn": "nqn.2016-06.io.spdk:cnode1894", 00:13:44.764 "serial_number": ".I*9\u007f\"|u;\"0>S;<0'\''t>nQ", 00:13:44.764 "method": "nvmf_create_subsystem", 00:13:44.764 "req_id": 1 00:13:44.764 } 00:13:44.764 Got JSON-RPC error response 00:13:44.764 response: 00:13:44.764 { 00:13:44.764 "code": -32602, 00:13:44.764 "message": "Invalid SN .I*9\u007f\"|u;\"0>S;<0'\''t>nQ" 00:13:44.764 }' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:44.764 { 00:13:44.764 "nqn": "nqn.2016-06.io.spdk:cnode1894", 00:13:44.764 "serial_number": ".I*9\u007f\"|u;\"0>S;<0't>nQ", 00:13:44.764 "method": "nvmf_create_subsystem", 00:13:44.764 "req_id": 1 00:13:44.764 } 00:13:44.764 Got JSON-RPC error response 00:13:44.764 response: 00:13:44.764 { 00:13:44.764 "code": -32602, 00:13:44.764 "message": "Invalid SN .I*9\u007f\"|u;\"0>S;<0't>nQ" 00:13:44.764 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' 
'67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x79' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:44.764 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:44.765 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.765 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.765 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:44.765 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:44.765 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:44.765 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:44.765 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:44.765 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 126 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:45.023 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:45.024 16:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:45.024 
16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.024 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:45.025 
16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:45.025 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:45.025 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.025 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.025 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:45.025 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:45.025 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:45.025 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:45.025 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:45.025 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:13:45.025 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'p/1R+y /dev/null' 00:13:47.353 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.261 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:49.261 00:13:49.261 real 0m11.973s 00:13:49.261 user 0m18.419s 00:13:49.261 sys 0m5.426s 00:13:49.261 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:49.261 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:49.261 ************************************ 00:13:49.261 END TEST nvmf_invalid 00:13:49.261 ************************************ 00:13:49.261 16:38:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:49.261 16:38:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:49.261 16:38:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:49.261 16:38:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:49.520 ************************************ 00:13:49.520 START TEST nvmf_connect_stress 00:13:49.520 ************************************ 00:13:49.520 16:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:49.520 * Looking for test storage... 
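[Editor's note] The long run of string+= iterations above is target/invalid.sh building random serial and model numbers (here 21 and 41 characters) one character at a time from ASCII codes 32 through 127, so the values can contain quotes, backslashes and the DEL control character, and then handing them to nvmf_create_subsystem over rpc.py, expecting the JSON-RPC layer to reject them with "Invalid SN" / "Invalid MN". The sketch below is reconstructed from the xtrace, not copied from the script, so the real helper may differ in details.

  # Sketch (reconstructed from the trace): random string of ASCII codes 32..127.
  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))    # the traced array holds the literal codes '32'..'127'
      for (( ll = 0; ll < length; ll++ )); do
          # pick a code, convert it to hex, and append the corresponding character
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }
  # The value is then used as, e.g., a serial number (cnode number is a placeholder here):
  #   rpc.py nvmf_create_subsystem -s "$(gen_random_s 21)" nqn.2016-06.io.spdk:cnodeNNNN
  # and the test asserts that the JSON-RPC error message contains "Invalid SN".

[Resuming log]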
00:13:49.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:49.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.520 --rc genhtml_branch_coverage=1 00:13:49.520 --rc genhtml_function_coverage=1 00:13:49.520 --rc genhtml_legend=1 00:13:49.520 --rc geninfo_all_blocks=1 00:13:49.520 --rc geninfo_unexecuted_blocks=1 00:13:49.520 00:13:49.520 ' 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:49.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.520 --rc genhtml_branch_coverage=1 00:13:49.520 --rc genhtml_function_coverage=1 00:13:49.520 --rc genhtml_legend=1 00:13:49.520 --rc geninfo_all_blocks=1 00:13:49.520 --rc geninfo_unexecuted_blocks=1 00:13:49.520 00:13:49.520 ' 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:49.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.520 --rc genhtml_branch_coverage=1 00:13:49.520 --rc genhtml_function_coverage=1 00:13:49.520 --rc genhtml_legend=1 00:13:49.520 --rc geninfo_all_blocks=1 00:13:49.520 --rc geninfo_unexecuted_blocks=1 00:13:49.520 00:13:49.520 ' 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:49.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.520 --rc genhtml_branch_coverage=1 00:13:49.520 --rc genhtml_function_coverage=1 00:13:49.520 --rc genhtml_legend=1 00:13:49.520 --rc geninfo_all_blocks=1 00:13:49.520 --rc geninfo_unexecuted_blocks=1 00:13:49.520 00:13:49.520 ' 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.520 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:49.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:49.521 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:56.092 16:38:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:56.092 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:56.092 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:56.093 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:56.093 Found net devices under 0000:86:00.0: cvl_0_0 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:56.093 Found net devices under 0000:86:00.1: cvl_0_1 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:56.093 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:56.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:13:56.093 00:13:56.093 --- 10.0.0.2 ping statistics --- 00:13:56.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.093 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:13:56.093 00:13:56.093 --- 10.0.0.1 ping statistics --- 00:13:56.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.093 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=489274 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 489274 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 489274 ']' 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:56.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.093 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.093 [2024-10-14 16:39:00.143159] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:13:56.093 [2024-10-14 16:39:00.143205] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.093 [2024-10-14 16:39:00.214751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:56.093 [2024-10-14 16:39:00.258428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.093 [2024-10-14 16:39:00.258460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.093 [2024-10-14 16:39:00.258469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.093 [2024-10-14 16:39:00.258476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.093 [2024-10-14 16:39:00.258484] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.093 [2024-10-14 16:39:00.259918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.094 [2024-10-14 16:39:00.260003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.094 [2024-10-14 16:39:00.260003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.094 [2024-10-14 16:39:00.407514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
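The "[: : integer expression expected" complaint from test/nvmf/common.sh line 33 above comes from an empty variable being compared with -eq; the script carries on past it, so in this run it is noise rather than a failure. By this point nvmftestinit has found the two E810 ports (0000:86:00.0 and 0000:86:00.1, bound to the ice driver and exposing cvl_0_0 and cvl_0_1), split them across a network namespace, started nvmf_tgt (pid 489274) inside that namespace, and created the TCP transport. A condensed sketch of that bring-up, reconstructed from the trace above; the interface names and 10.0.0.x addresses are the values this particular run printed and will differ on other machines:

    # Topology built by nvmf_tcp_init: target port in its own namespace, initiator port in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    modprobe nvme-tcp                                                    # kernel initiator driver

    # Target application, launched inside the namespace (pid 489274 in this run)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE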
00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.094 [2024-10-14 16:39:00.428024] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.094 NULL1 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=489435 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:56.094 16:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.094 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.353 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.353 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:56.353 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.353 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.353 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.612 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.612 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:56.612 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.612 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.612 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.870 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.870 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:56.870 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.870 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.870 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.437 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.437 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:57.437 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.437 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.437 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.695 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.695 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:57.695 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.695 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.695 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.953 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.954 16:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:57.954 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.954 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.954 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.212 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.212 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:58.212 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.212 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.212 16:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.778 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.778 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:58.778 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.778 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.778 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.037 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.037 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:59.037 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.037 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.037 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.295 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.296 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:59.296 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.296 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.296 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.554 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.554 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:59.554 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.554 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.554 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.812 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.812 16:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:13:59.812 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.812 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.812 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.379 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.379 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:00.379 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.379 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.379 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.638 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.638 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:00.638 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.638 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.638 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.896 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.896 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:00.896 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.896 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.896 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.154 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.154 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:01.154 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.154 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.154 16:39:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.721 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.721 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:01.721 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.721 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.721 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.979 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.979 16:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:01.979 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.979 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.979 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.238 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.238 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:02.238 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.238 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.238 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.497 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.497 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:02.497 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.497 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.497 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.756 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.756 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:02.756 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.756 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.756 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.324 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.324 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:03.324 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.324 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.324 16:39:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.582 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.582 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:03.582 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.582 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.582 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.840 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.840 16:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:03.840 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.840 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.840 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.098 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.098 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:04.098 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.098 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.098 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.357 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.615 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:04.615 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.616 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.616 16:39:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.874 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.874 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:04.874 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.874 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.874 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.132 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.132 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:05.132 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.132 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.132 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.391 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.391 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:05.391 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.391 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.391 16:39:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.959 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.959 16:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:05.959 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.959 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.959 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.959 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 489435 00:14:06.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (489435) - No such process 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 489435 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.218 rmmod nvme_tcp 00:14:06.218 rmmod nvme_fabrics 00:14:06.218 rmmod nvme_keyring 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 489274 ']' 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 489274 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 489274 ']' 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 489274 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 489274 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
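Once the 10-second stress window expires the connect_stress client (pid 489435) has exited, kill -0 reports "No such process", and the polling loop above ends; the teardown that follows removes rpc.txt, unloads nvme_tcp/nvme_fabrics/nvme_keyring, kills the nvmf_tgt process (489274), strips the SPDK_NVMF iptables rule and cleans up the spdk namespace. Condensed from the trace, the provisioning and stress steps this test exercised were:

    # Target-side provisioning, issued through the test's rpc_cmd wrapper into the namespaced target
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512      # null bdev NULL1, as shown in the trace

    # Stress client, backgrounded; its pid (489435 here) is what the kill -0 loop polls
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &

The repeated connect_stress.sh@34/@35 pairs above are the waiting loop: check that the client is still alive with kill -0, then issue another rpc_cmd batch, presumably the commands collected into rpc.txt by the seq 1 20 / cat loop earlier.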
00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 489274' 00:14:06.218 killing process with pid 489274 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 489274 00:14:06.218 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 489274 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.478 16:39:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.380 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.380 00:14:08.380 real 0m19.069s 00:14:08.380 user 0m39.454s 00:14:08.380 sys 0m8.537s 00:14:08.380 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.380 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.380 ************************************ 00:14:08.380 END TEST nvmf_connect_stress 00:14:08.380 ************************************ 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.640 ************************************ 00:14:08.640 START TEST nvmf_fused_ordering 00:14:08.640 ************************************ 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:08.640 * Looking for test storage... 
00:14:08.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:08.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.640 --rc genhtml_branch_coverage=1 00:14:08.640 --rc genhtml_function_coverage=1 00:14:08.640 --rc genhtml_legend=1 00:14:08.640 --rc geninfo_all_blocks=1 00:14:08.640 --rc geninfo_unexecuted_blocks=1 00:14:08.640 00:14:08.640 ' 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:08.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.640 --rc genhtml_branch_coverage=1 00:14:08.640 --rc genhtml_function_coverage=1 00:14:08.640 --rc genhtml_legend=1 00:14:08.640 --rc geninfo_all_blocks=1 00:14:08.640 --rc geninfo_unexecuted_blocks=1 00:14:08.640 00:14:08.640 ' 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:08.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.640 --rc genhtml_branch_coverage=1 00:14:08.640 --rc genhtml_function_coverage=1 00:14:08.640 --rc genhtml_legend=1 00:14:08.640 --rc geninfo_all_blocks=1 00:14:08.640 --rc geninfo_unexecuted_blocks=1 00:14:08.640 00:14:08.640 ' 00:14:08.640 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:08.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.640 --rc genhtml_branch_coverage=1 00:14:08.641 --rc genhtml_function_coverage=1 00:14:08.641 --rc genhtml_legend=1 00:14:08.641 --rc geninfo_all_blocks=1 00:14:08.641 --rc geninfo_unexecuted_blocks=1 00:14:08.641 00:14:08.641 ' 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:08.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.641 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:08.900 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:08.900 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:08.900 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:15.472 16:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:15.472 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:15.472 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:15.472 Found net devices under 0000:86:00.0: cvl_0_0 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:15.472 Found net devices under 0000:86:00.1: cvl_0_1 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.472 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:15.473 16:39:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:15.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:14:15.473 00:14:15.473 --- 10.0.0.2 ping statistics --- 00:14:15.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.473 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:14:15.473 00:14:15.473 --- 10.0.0.1 ping statistics --- 00:14:15.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.473 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=495185 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 495185 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 495185 ']' 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:15.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.473 [2024-10-14 16:39:19.338899] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:14:15.473 [2024-10-14 16:39:19.338941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.473 [2024-10-14 16:39:19.408247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.473 [2024-10-14 16:39:19.448260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.473 [2024-10-14 16:39:19.448296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.473 [2024-10-14 16:39:19.448303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.473 [2024-10-14 16:39:19.448309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.473 [2024-10-14 16:39:19.448314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.473 [2024-10-14 16:39:19.448875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.473 [2024-10-14 16:39:19.593869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.473 [2024-10-14 16:39:19.614083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.473 NULL1 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.473 16:39:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:15.473 [2024-10-14 16:39:19.669945] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
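Before the fused-ordering tool above connects, the harness has already stood up the target over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a 1000 MB null bdev attached as a namespace. As a rough sketch (not the harness's rpc_cmd wrapper itself), the same configuration could be issued directly with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock; $SPDK_DIR below is an assumed stand-in for the checkout path and is not defined in this log:

# Sketch only: mirror the target configuration traced above with rpc.py.
# Assumes nvmf_tgt is already running and $SPDK_DIR points at the SPDK tree.
rpc="$SPDK_DIR/scripts/rpc.py"

# Transport options exactly as traced above (-t tcp -o -u 8192)
$rpc nvmf_create_transport -t tcp -o -u 8192

# Subsystem: allow any host (-a), serial number (-s), max namespaces (-m)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Listener on the namespaced target address used throughout this run
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 1000 MB null bdev with 512-byte blocks, then attach it as a namespace
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Finally, the fused-ordering exerciser connects to that subsystem over TCP
$SPDK_DIR/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
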
00:14:15.473 [2024-10-14 16:39:19.669977] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495206 ] 00:14:15.733 Attached to nqn.2016-06.io.spdk:cnode1 00:14:15.733 Namespace ID: 1 size: 1GB 00:14:15.733 fused_ordering(0) 00:14:15.733 fused_ordering(1) 00:14:15.733 fused_ordering(2) 00:14:15.733 fused_ordering(3) 00:14:15.733 fused_ordering(4) 00:14:15.733 fused_ordering(5) 00:14:15.733 fused_ordering(6) 00:14:15.733 fused_ordering(7) 00:14:15.733 fused_ordering(8) 00:14:15.733 fused_ordering(9) 00:14:15.733 fused_ordering(10) 00:14:15.733 fused_ordering(11) 00:14:15.733 fused_ordering(12) 00:14:15.733 fused_ordering(13) 00:14:15.733 fused_ordering(14) 00:14:15.733 fused_ordering(15) 00:14:15.733 fused_ordering(16) 00:14:15.733 fused_ordering(17) 00:14:15.733 fused_ordering(18) 00:14:15.733 fused_ordering(19) 00:14:15.733 fused_ordering(20) 00:14:15.733 fused_ordering(21) 00:14:15.733 fused_ordering(22) 00:14:15.733 fused_ordering(23) 00:14:15.733 fused_ordering(24) 00:14:15.733 fused_ordering(25) 00:14:15.733 fused_ordering(26) 00:14:15.733 fused_ordering(27) 00:14:15.733 fused_ordering(28) 00:14:15.733 fused_ordering(29) 00:14:15.733 fused_ordering(30) 00:14:15.733 fused_ordering(31) 00:14:15.733 fused_ordering(32) 00:14:15.733 fused_ordering(33) 00:14:15.733 fused_ordering(34) 00:14:15.733 fused_ordering(35) 00:14:15.733 fused_ordering(36) 00:14:15.733 fused_ordering(37) 00:14:15.733 fused_ordering(38) 00:14:15.733 fused_ordering(39) 00:14:15.733 fused_ordering(40) 00:14:15.733 fused_ordering(41) 00:14:15.733 fused_ordering(42) 00:14:15.733 fused_ordering(43) 00:14:15.733 fused_ordering(44) 00:14:15.733 fused_ordering(45) 00:14:15.733 fused_ordering(46) 00:14:15.733 fused_ordering(47) 00:14:15.733 fused_ordering(48) 00:14:15.733 fused_ordering(49) 00:14:15.733 fused_ordering(50) 00:14:15.733 fused_ordering(51) 00:14:15.733 fused_ordering(52) 00:14:15.733 fused_ordering(53) 00:14:15.733 fused_ordering(54) 00:14:15.733 fused_ordering(55) 00:14:15.733 fused_ordering(56) 00:14:15.733 fused_ordering(57) 00:14:15.733 fused_ordering(58) 00:14:15.733 fused_ordering(59) 00:14:15.733 fused_ordering(60) 00:14:15.733 fused_ordering(61) 00:14:15.733 fused_ordering(62) 00:14:15.733 fused_ordering(63) 00:14:15.733 fused_ordering(64) 00:14:15.733 fused_ordering(65) 00:14:15.733 fused_ordering(66) 00:14:15.733 fused_ordering(67) 00:14:15.733 fused_ordering(68) 00:14:15.733 fused_ordering(69) 00:14:15.733 fused_ordering(70) 00:14:15.733 fused_ordering(71) 00:14:15.733 fused_ordering(72) 00:14:15.733 fused_ordering(73) 00:14:15.733 fused_ordering(74) 00:14:15.733 fused_ordering(75) 00:14:15.733 fused_ordering(76) 00:14:15.733 fused_ordering(77) 00:14:15.733 fused_ordering(78) 00:14:15.733 fused_ordering(79) 00:14:15.733 fused_ordering(80) 00:14:15.733 fused_ordering(81) 00:14:15.733 fused_ordering(82) 00:14:15.733 fused_ordering(83) 00:14:15.733 fused_ordering(84) 00:14:15.733 fused_ordering(85) 00:14:15.733 fused_ordering(86) 00:14:15.733 fused_ordering(87) 00:14:15.733 fused_ordering(88) 00:14:15.733 fused_ordering(89) 00:14:15.733 fused_ordering(90) 00:14:15.733 fused_ordering(91) 00:14:15.733 fused_ordering(92) 00:14:15.733 fused_ordering(93) 00:14:15.733 fused_ordering(94) 00:14:15.733 fused_ordering(95) 00:14:15.733 fused_ordering(96) 00:14:15.733 fused_ordering(97) 00:14:15.733 fused_ordering(98) 
00:14:15.733 fused_ordering(99) 00:14:15.733 fused_ordering(100) 00:14:15.733 fused_ordering(101) 00:14:15.733 fused_ordering(102) 00:14:15.733 fused_ordering(103) 00:14:15.733 fused_ordering(104) 00:14:15.733 fused_ordering(105) 00:14:15.733 fused_ordering(106) 00:14:15.733 fused_ordering(107) 00:14:15.733 fused_ordering(108) 00:14:15.733 fused_ordering(109) 00:14:15.733 fused_ordering(110) 00:14:15.733 fused_ordering(111) 00:14:15.733 fused_ordering(112) 00:14:15.733 fused_ordering(113) 00:14:15.733 fused_ordering(114) 00:14:15.733 fused_ordering(115) 00:14:15.733 fused_ordering(116) 00:14:15.733 fused_ordering(117) 00:14:15.733 fused_ordering(118) 00:14:15.733 fused_ordering(119) 00:14:15.733 fused_ordering(120) 00:14:15.733 fused_ordering(121) 00:14:15.733 fused_ordering(122) 00:14:15.733 fused_ordering(123) 00:14:15.733 fused_ordering(124) 00:14:15.733 fused_ordering(125) 00:14:15.733 fused_ordering(126) 00:14:15.733 fused_ordering(127) 00:14:15.733 fused_ordering(128) 00:14:15.733 fused_ordering(129) 00:14:15.733 fused_ordering(130) 00:14:15.733 fused_ordering(131) 00:14:15.733 fused_ordering(132) 00:14:15.733 fused_ordering(133) 00:14:15.733 fused_ordering(134) 00:14:15.733 fused_ordering(135) 00:14:15.733 fused_ordering(136) 00:14:15.733 fused_ordering(137) 00:14:15.733 fused_ordering(138) 00:14:15.733 fused_ordering(139) 00:14:15.733 fused_ordering(140) 00:14:15.733 fused_ordering(141) 00:14:15.733 fused_ordering(142) 00:14:15.733 fused_ordering(143) 00:14:15.733 fused_ordering(144) 00:14:15.733 fused_ordering(145) 00:14:15.733 fused_ordering(146) 00:14:15.733 fused_ordering(147) 00:14:15.733 fused_ordering(148) 00:14:15.733 fused_ordering(149) 00:14:15.733 fused_ordering(150) 00:14:15.733 fused_ordering(151) 00:14:15.733 fused_ordering(152) 00:14:15.733 fused_ordering(153) 00:14:15.733 fused_ordering(154) 00:14:15.733 fused_ordering(155) 00:14:15.733 fused_ordering(156) 00:14:15.733 fused_ordering(157) 00:14:15.733 fused_ordering(158) 00:14:15.733 fused_ordering(159) 00:14:15.733 fused_ordering(160) 00:14:15.733 fused_ordering(161) 00:14:15.733 fused_ordering(162) 00:14:15.733 fused_ordering(163) 00:14:15.733 fused_ordering(164) 00:14:15.733 fused_ordering(165) 00:14:15.733 fused_ordering(166) 00:14:15.733 fused_ordering(167) 00:14:15.733 fused_ordering(168) 00:14:15.733 fused_ordering(169) 00:14:15.733 fused_ordering(170) 00:14:15.733 fused_ordering(171) 00:14:15.733 fused_ordering(172) 00:14:15.733 fused_ordering(173) 00:14:15.733 fused_ordering(174) 00:14:15.733 fused_ordering(175) 00:14:15.733 fused_ordering(176) 00:14:15.733 fused_ordering(177) 00:14:15.733 fused_ordering(178) 00:14:15.733 fused_ordering(179) 00:14:15.733 fused_ordering(180) 00:14:15.733 fused_ordering(181) 00:14:15.733 fused_ordering(182) 00:14:15.733 fused_ordering(183) 00:14:15.733 fused_ordering(184) 00:14:15.733 fused_ordering(185) 00:14:15.733 fused_ordering(186) 00:14:15.733 fused_ordering(187) 00:14:15.733 fused_ordering(188) 00:14:15.733 fused_ordering(189) 00:14:15.733 fused_ordering(190) 00:14:15.733 fused_ordering(191) 00:14:15.733 fused_ordering(192) 00:14:15.733 fused_ordering(193) 00:14:15.733 fused_ordering(194) 00:14:15.733 fused_ordering(195) 00:14:15.733 fused_ordering(196) 00:14:15.733 fused_ordering(197) 00:14:15.733 fused_ordering(198) 00:14:15.733 fused_ordering(199) 00:14:15.733 fused_ordering(200) 00:14:15.733 fused_ordering(201) 00:14:15.733 fused_ordering(202) 00:14:15.733 fused_ordering(203) 00:14:15.733 fused_ordering(204) 00:14:15.733 fused_ordering(205) 00:14:15.733 
fused_ordering(206) 00:14:15.733 fused_ordering(207) 00:14:15.733 fused_ordering(208) 00:14:15.733 fused_ordering(209) 00:14:15.733 fused_ordering(210) 00:14:15.733 fused_ordering(211) 00:14:15.733 fused_ordering(212) 00:14:15.733 fused_ordering(213) 00:14:15.733 fused_ordering(214) 00:14:15.733 fused_ordering(215) 00:14:15.733 fused_ordering(216) 00:14:15.734 fused_ordering(217) 00:14:15.734 fused_ordering(218) 00:14:15.734 fused_ordering(219) 00:14:15.734 fused_ordering(220) 00:14:15.734 fused_ordering(221) 00:14:15.734 fused_ordering(222) 00:14:15.734 fused_ordering(223) 00:14:15.734 fused_ordering(224) 00:14:15.734 fused_ordering(225) 00:14:15.734 fused_ordering(226) 00:14:15.734 fused_ordering(227) 00:14:15.734 fused_ordering(228) 00:14:15.734 fused_ordering(229) 00:14:15.734 fused_ordering(230) 00:14:15.734 fused_ordering(231) 00:14:15.734 fused_ordering(232) 00:14:15.734 fused_ordering(233) 00:14:15.734 fused_ordering(234) 00:14:15.734 fused_ordering(235) 00:14:15.734 fused_ordering(236) 00:14:15.734 fused_ordering(237) 00:14:15.734 fused_ordering(238) 00:14:15.734 fused_ordering(239) 00:14:15.734 fused_ordering(240) 00:14:15.734 fused_ordering(241) 00:14:15.734 fused_ordering(242) 00:14:15.734 fused_ordering(243) 00:14:15.734 fused_ordering(244) 00:14:15.734 fused_ordering(245) 00:14:15.734 fused_ordering(246) 00:14:15.734 fused_ordering(247) 00:14:15.734 fused_ordering(248) 00:14:15.734 fused_ordering(249) 00:14:15.734 fused_ordering(250) 00:14:15.734 fused_ordering(251) 00:14:15.734 fused_ordering(252) 00:14:15.734 fused_ordering(253) 00:14:15.734 fused_ordering(254) 00:14:15.734 fused_ordering(255) 00:14:15.734 fused_ordering(256) 00:14:15.734 fused_ordering(257) 00:14:15.734 fused_ordering(258) 00:14:15.734 fused_ordering(259) 00:14:15.734 fused_ordering(260) 00:14:15.734 fused_ordering(261) 00:14:15.734 fused_ordering(262) 00:14:15.734 fused_ordering(263) 00:14:15.734 fused_ordering(264) 00:14:15.734 fused_ordering(265) 00:14:15.734 fused_ordering(266) 00:14:15.734 fused_ordering(267) 00:14:15.734 fused_ordering(268) 00:14:15.734 fused_ordering(269) 00:14:15.734 fused_ordering(270) 00:14:15.734 fused_ordering(271) 00:14:15.734 fused_ordering(272) 00:14:15.734 fused_ordering(273) 00:14:15.734 fused_ordering(274) 00:14:15.734 fused_ordering(275) 00:14:15.734 fused_ordering(276) 00:14:15.734 fused_ordering(277) 00:14:15.734 fused_ordering(278) 00:14:15.734 fused_ordering(279) 00:14:15.734 fused_ordering(280) 00:14:15.734 fused_ordering(281) 00:14:15.734 fused_ordering(282) 00:14:15.734 fused_ordering(283) 00:14:15.734 fused_ordering(284) 00:14:15.734 fused_ordering(285) 00:14:15.734 fused_ordering(286) 00:14:15.734 fused_ordering(287) 00:14:15.734 fused_ordering(288) 00:14:15.734 fused_ordering(289) 00:14:15.734 fused_ordering(290) 00:14:15.734 fused_ordering(291) 00:14:15.734 fused_ordering(292) 00:14:15.734 fused_ordering(293) 00:14:15.734 fused_ordering(294) 00:14:15.734 fused_ordering(295) 00:14:15.734 fused_ordering(296) 00:14:15.734 fused_ordering(297) 00:14:15.734 fused_ordering(298) 00:14:15.734 fused_ordering(299) 00:14:15.734 fused_ordering(300) 00:14:15.734 fused_ordering(301) 00:14:15.734 fused_ordering(302) 00:14:15.734 fused_ordering(303) 00:14:15.734 fused_ordering(304) 00:14:15.734 fused_ordering(305) 00:14:15.734 fused_ordering(306) 00:14:15.734 fused_ordering(307) 00:14:15.734 fused_ordering(308) 00:14:15.734 fused_ordering(309) 00:14:15.734 fused_ordering(310) 00:14:15.734 fused_ordering(311) 00:14:15.734 fused_ordering(312) 00:14:15.734 fused_ordering(313) 
00:14:15.734 fused_ordering(314) 00:14:15.734 fused_ordering(315) 00:14:15.734 fused_ordering(316) 00:14:15.734 fused_ordering(317) 00:14:15.734 fused_ordering(318) 00:14:15.734 fused_ordering(319) 00:14:15.734 fused_ordering(320) 00:14:15.734 fused_ordering(321) 00:14:15.734 fused_ordering(322) 00:14:15.734 fused_ordering(323) 00:14:15.734 fused_ordering(324) 00:14:15.734 fused_ordering(325) 00:14:15.734 fused_ordering(326) 00:14:15.734 fused_ordering(327) 00:14:15.734 fused_ordering(328) 00:14:15.734 fused_ordering(329) 00:14:15.734 fused_ordering(330) 00:14:15.734 fused_ordering(331) 00:14:15.734 fused_ordering(332) 00:14:15.734 fused_ordering(333) 00:14:15.734 fused_ordering(334) 00:14:15.734 fused_ordering(335) 00:14:15.734 fused_ordering(336) 00:14:15.734 fused_ordering(337) 00:14:15.734 fused_ordering(338) 00:14:15.734 fused_ordering(339) 00:14:15.734 fused_ordering(340) 00:14:15.734 fused_ordering(341) 00:14:15.734 fused_ordering(342) 00:14:15.734 fused_ordering(343) 00:14:15.734 fused_ordering(344) 00:14:15.734 fused_ordering(345) 00:14:15.734 fused_ordering(346) 00:14:15.734 fused_ordering(347) 00:14:15.734 fused_ordering(348) 00:14:15.734 fused_ordering(349) 00:14:15.734 fused_ordering(350) 00:14:15.734 fused_ordering(351) 00:14:15.734 fused_ordering(352) 00:14:15.734 fused_ordering(353) 00:14:15.734 fused_ordering(354) 00:14:15.734 fused_ordering(355) 00:14:15.734 fused_ordering(356) 00:14:15.734 fused_ordering(357) 00:14:15.734 fused_ordering(358) 00:14:15.734 fused_ordering(359) 00:14:15.734 fused_ordering(360) 00:14:15.734 fused_ordering(361) 00:14:15.734 fused_ordering(362) 00:14:15.734 fused_ordering(363) 00:14:15.734 fused_ordering(364) 00:14:15.734 fused_ordering(365) 00:14:15.734 fused_ordering(366) 00:14:15.734 fused_ordering(367) 00:14:15.734 fused_ordering(368) 00:14:15.734 fused_ordering(369) 00:14:15.734 fused_ordering(370) 00:14:15.734 fused_ordering(371) 00:14:15.734 fused_ordering(372) 00:14:15.734 fused_ordering(373) 00:14:15.734 fused_ordering(374) 00:14:15.734 fused_ordering(375) 00:14:15.734 fused_ordering(376) 00:14:15.734 fused_ordering(377) 00:14:15.734 fused_ordering(378) 00:14:15.734 fused_ordering(379) 00:14:15.734 fused_ordering(380) 00:14:15.734 fused_ordering(381) 00:14:15.734 fused_ordering(382) 00:14:15.734 fused_ordering(383) 00:14:15.734 fused_ordering(384) 00:14:15.734 fused_ordering(385) 00:14:15.734 fused_ordering(386) 00:14:15.734 fused_ordering(387) 00:14:15.734 fused_ordering(388) 00:14:15.734 fused_ordering(389) 00:14:15.734 fused_ordering(390) 00:14:15.734 fused_ordering(391) 00:14:15.734 fused_ordering(392) 00:14:15.734 fused_ordering(393) 00:14:15.734 fused_ordering(394) 00:14:15.734 fused_ordering(395) 00:14:15.734 fused_ordering(396) 00:14:15.734 fused_ordering(397) 00:14:15.734 fused_ordering(398) 00:14:15.734 fused_ordering(399) 00:14:15.734 fused_ordering(400) 00:14:15.734 fused_ordering(401) 00:14:15.734 fused_ordering(402) 00:14:15.734 fused_ordering(403) 00:14:15.734 fused_ordering(404) 00:14:15.734 fused_ordering(405) 00:14:15.734 fused_ordering(406) 00:14:15.734 fused_ordering(407) 00:14:15.734 fused_ordering(408) 00:14:15.734 fused_ordering(409) 00:14:15.734 fused_ordering(410) 00:14:16.302 fused_ordering(411) 00:14:16.302 fused_ordering(412) 00:14:16.302 fused_ordering(413) 00:14:16.302 fused_ordering(414) 00:14:16.302 fused_ordering(415) 00:14:16.302 fused_ordering(416) 00:14:16.302 fused_ordering(417) 00:14:16.302 fused_ordering(418) 00:14:16.302 fused_ordering(419) 00:14:16.302 fused_ordering(420) 00:14:16.302 
fused_ordering(421) 00:14:16.302 fused_ordering(422) 00:14:16.302 fused_ordering(423) 00:14:16.302 fused_ordering(424) 00:14:16.302 fused_ordering(425) 00:14:16.302 fused_ordering(426) 00:14:16.302 fused_ordering(427) 00:14:16.302 fused_ordering(428) 00:14:16.302 fused_ordering(429) 00:14:16.302 fused_ordering(430) 00:14:16.302 fused_ordering(431) 00:14:16.302 fused_ordering(432) 00:14:16.302 fused_ordering(433) 00:14:16.302 fused_ordering(434) 00:14:16.302 fused_ordering(435) 00:14:16.302 fused_ordering(436) 00:14:16.302 fused_ordering(437) 00:14:16.302 fused_ordering(438) 00:14:16.302 fused_ordering(439) 00:14:16.302 fused_ordering(440) 00:14:16.302 fused_ordering(441) 00:14:16.302 fused_ordering(442) 00:14:16.302 fused_ordering(443) 00:14:16.302 fused_ordering(444) 00:14:16.302 fused_ordering(445) 00:14:16.302 fused_ordering(446) 00:14:16.302 fused_ordering(447) 00:14:16.302 fused_ordering(448) 00:14:16.302 fused_ordering(449) 00:14:16.302 fused_ordering(450) 00:14:16.302 fused_ordering(451) 00:14:16.302 fused_ordering(452) 00:14:16.302 fused_ordering(453) 00:14:16.302 fused_ordering(454) 00:14:16.302 fused_ordering(455) 00:14:16.302 fused_ordering(456) 00:14:16.302 fused_ordering(457) 00:14:16.302 fused_ordering(458) 00:14:16.302 fused_ordering(459) 00:14:16.302 fused_ordering(460) 00:14:16.302 fused_ordering(461) 00:14:16.302 fused_ordering(462) 00:14:16.302 fused_ordering(463) 00:14:16.302 fused_ordering(464) 00:14:16.302 fused_ordering(465) 00:14:16.302 fused_ordering(466) 00:14:16.302 fused_ordering(467) 00:14:16.302 fused_ordering(468) 00:14:16.302 fused_ordering(469) 00:14:16.302 fused_ordering(470) 00:14:16.302 fused_ordering(471) 00:14:16.302 fused_ordering(472) 00:14:16.302 fused_ordering(473) 00:14:16.302 fused_ordering(474) 00:14:16.302 fused_ordering(475) 00:14:16.302 fused_ordering(476) 00:14:16.302 fused_ordering(477) 00:14:16.302 fused_ordering(478) 00:14:16.302 fused_ordering(479) 00:14:16.302 fused_ordering(480) 00:14:16.302 fused_ordering(481) 00:14:16.302 fused_ordering(482) 00:14:16.302 fused_ordering(483) 00:14:16.302 fused_ordering(484) 00:14:16.302 fused_ordering(485) 00:14:16.302 fused_ordering(486) 00:14:16.302 fused_ordering(487) 00:14:16.302 fused_ordering(488) 00:14:16.302 fused_ordering(489) 00:14:16.302 fused_ordering(490) 00:14:16.302 fused_ordering(491) 00:14:16.302 fused_ordering(492) 00:14:16.302 fused_ordering(493) 00:14:16.302 fused_ordering(494) 00:14:16.302 fused_ordering(495) 00:14:16.302 fused_ordering(496) 00:14:16.302 fused_ordering(497) 00:14:16.302 fused_ordering(498) 00:14:16.302 fused_ordering(499) 00:14:16.302 fused_ordering(500) 00:14:16.302 fused_ordering(501) 00:14:16.302 fused_ordering(502) 00:14:16.302 fused_ordering(503) 00:14:16.302 fused_ordering(504) 00:14:16.302 fused_ordering(505) 00:14:16.302 fused_ordering(506) 00:14:16.302 fused_ordering(507) 00:14:16.302 fused_ordering(508) 00:14:16.302 fused_ordering(509) 00:14:16.302 fused_ordering(510) 00:14:16.302 fused_ordering(511) 00:14:16.302 fused_ordering(512) 00:14:16.302 fused_ordering(513) 00:14:16.302 fused_ordering(514) 00:14:16.302 fused_ordering(515) 00:14:16.302 fused_ordering(516) 00:14:16.302 fused_ordering(517) 00:14:16.302 fused_ordering(518) 00:14:16.302 fused_ordering(519) 00:14:16.302 fused_ordering(520) 00:14:16.302 fused_ordering(521) 00:14:16.302 fused_ordering(522) 00:14:16.302 fused_ordering(523) 00:14:16.302 fused_ordering(524) 00:14:16.302 fused_ordering(525) 00:14:16.302 fused_ordering(526) 00:14:16.302 fused_ordering(527) 00:14:16.302 fused_ordering(528) 
00:14:16.302 fused_ordering(529) 00:14:16.302 fused_ordering(530) 00:14:16.302 fused_ordering(531) 00:14:16.302 fused_ordering(532) 00:14:16.302 fused_ordering(533) 00:14:16.302 fused_ordering(534) 00:14:16.302 fused_ordering(535) 00:14:16.302 fused_ordering(536) 00:14:16.302 fused_ordering(537) 00:14:16.302 fused_ordering(538) 00:14:16.302 fused_ordering(539) 00:14:16.302 fused_ordering(540) 00:14:16.302 fused_ordering(541) 00:14:16.302 fused_ordering(542) 00:14:16.302 fused_ordering(543) 00:14:16.302 fused_ordering(544) 00:14:16.302 fused_ordering(545) 00:14:16.302 fused_ordering(546) 00:14:16.302 fused_ordering(547) 00:14:16.302 fused_ordering(548) 00:14:16.302 fused_ordering(549) 00:14:16.302 fused_ordering(550) 00:14:16.302 fused_ordering(551) 00:14:16.302 fused_ordering(552) 00:14:16.302 fused_ordering(553) 00:14:16.302 fused_ordering(554) 00:14:16.302 fused_ordering(555) 00:14:16.302 fused_ordering(556) 00:14:16.302 fused_ordering(557) 00:14:16.302 fused_ordering(558) 00:14:16.302 fused_ordering(559) 00:14:16.302 fused_ordering(560) 00:14:16.302 fused_ordering(561) 00:14:16.302 fused_ordering(562) 00:14:16.302 fused_ordering(563) 00:14:16.302 fused_ordering(564) 00:14:16.302 fused_ordering(565) 00:14:16.302 fused_ordering(566) 00:14:16.302 fused_ordering(567) 00:14:16.302 fused_ordering(568) 00:14:16.302 fused_ordering(569) 00:14:16.302 fused_ordering(570) 00:14:16.302 fused_ordering(571) 00:14:16.302 fused_ordering(572) 00:14:16.302 fused_ordering(573) 00:14:16.302 fused_ordering(574) 00:14:16.302 fused_ordering(575) 00:14:16.302 fused_ordering(576) 00:14:16.302 fused_ordering(577) 00:14:16.302 fused_ordering(578) 00:14:16.302 fused_ordering(579) 00:14:16.302 fused_ordering(580) 00:14:16.302 fused_ordering(581) 00:14:16.302 fused_ordering(582) 00:14:16.302 fused_ordering(583) 00:14:16.302 fused_ordering(584) 00:14:16.302 fused_ordering(585) 00:14:16.302 fused_ordering(586) 00:14:16.302 fused_ordering(587) 00:14:16.302 fused_ordering(588) 00:14:16.302 fused_ordering(589) 00:14:16.302 fused_ordering(590) 00:14:16.302 fused_ordering(591) 00:14:16.302 fused_ordering(592) 00:14:16.302 fused_ordering(593) 00:14:16.302 fused_ordering(594) 00:14:16.302 fused_ordering(595) 00:14:16.302 fused_ordering(596) 00:14:16.302 fused_ordering(597) 00:14:16.302 fused_ordering(598) 00:14:16.302 fused_ordering(599) 00:14:16.302 fused_ordering(600) 00:14:16.302 fused_ordering(601) 00:14:16.302 fused_ordering(602) 00:14:16.302 fused_ordering(603) 00:14:16.302 fused_ordering(604) 00:14:16.302 fused_ordering(605) 00:14:16.302 fused_ordering(606) 00:14:16.302 fused_ordering(607) 00:14:16.302 fused_ordering(608) 00:14:16.302 fused_ordering(609) 00:14:16.302 fused_ordering(610) 00:14:16.302 fused_ordering(611) 00:14:16.302 fused_ordering(612) 00:14:16.303 fused_ordering(613) 00:14:16.303 fused_ordering(614) 00:14:16.303 fused_ordering(615) 00:14:16.561 fused_ordering(616) 00:14:16.561 fused_ordering(617) 00:14:16.561 fused_ordering(618) 00:14:16.561 fused_ordering(619) 00:14:16.561 fused_ordering(620) 00:14:16.561 fused_ordering(621) 00:14:16.561 fused_ordering(622) 00:14:16.561 fused_ordering(623) 00:14:16.561 fused_ordering(624) 00:14:16.561 fused_ordering(625) 00:14:16.561 fused_ordering(626) 00:14:16.561 fused_ordering(627) 00:14:16.561 fused_ordering(628) 00:14:16.561 fused_ordering(629) 00:14:16.561 fused_ordering(630) 00:14:16.561 fused_ordering(631) 00:14:16.561 fused_ordering(632) 00:14:16.561 fused_ordering(633) 00:14:16.561 fused_ordering(634) 00:14:16.561 fused_ordering(635) 00:14:16.561 
fused_ordering(636) through fused_ordering(1023) [repetitive per-iteration fused_ordering counter output, emitted between 00:14:16.561 and 00:14:17.130]
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:17.130 rmmod nvme_tcp
00:14:17.130 rmmod nvme_fabrics
00:14:17.130 rmmod nvme_keyring
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:14:17.130 16:39:21
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 495185 ']' 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 495185 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 495185 ']' 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 495185 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 495185 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 495185' 00:14:17.130 killing process with pid 495185 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 495185 00:14:17.130 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 495185 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.388 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.293 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:19.293 00:14:19.293 real 0m10.777s 00:14:19.293 user 0m5.105s 00:14:19.293 sys 0m5.844s 00:14:19.293 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.293 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:19.293 ************************************ 00:14:19.293 END TEST nvmf_fused_ordering 00:14:19.293 
************************************ 00:14:19.293 16:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:19.293 16:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:19.293 16:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.293 16:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:19.293 ************************************ 00:14:19.293 START TEST nvmf_ns_masking 00:14:19.293 ************************************ 00:14:19.293 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:19.553 * Looking for test storage... 00:14:19.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.553 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:19.553 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:19.553 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:19.553 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:19.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.554 --rc genhtml_branch_coverage=1 00:14:19.554 --rc genhtml_function_coverage=1 00:14:19.554 --rc genhtml_legend=1 00:14:19.554 --rc geninfo_all_blocks=1 00:14:19.554 --rc geninfo_unexecuted_blocks=1 00:14:19.554 00:14:19.554 ' 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:19.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.554 --rc genhtml_branch_coverage=1 00:14:19.554 --rc genhtml_function_coverage=1 00:14:19.554 --rc genhtml_legend=1 00:14:19.554 --rc geninfo_all_blocks=1 00:14:19.554 --rc geninfo_unexecuted_blocks=1 00:14:19.554 00:14:19.554 ' 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:19.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.554 --rc genhtml_branch_coverage=1 00:14:19.554 --rc genhtml_function_coverage=1 00:14:19.554 --rc genhtml_legend=1 00:14:19.554 --rc geninfo_all_blocks=1 00:14:19.554 --rc geninfo_unexecuted_blocks=1 00:14:19.554 00:14:19.554 ' 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:19.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:19.554 --rc genhtml_branch_coverage=1 00:14:19.554 --rc genhtml_function_coverage=1 00:14:19.554 --rc genhtml_legend=1 00:14:19.554 --rc geninfo_all_blocks=1 00:14:19.554 --rc geninfo_unexecuted_blocks=1 00:14:19.554 00:14:19.554 ' 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:19.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7ef8e02d-2f71-41fe-b6fc-051c9608dbf5 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6433dfe7-a147-4f13-9cc7-3d438b74eb2e 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=05c14382-66df-4f04-8d21-a0bca052b16e 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:19.554 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:26.135 16:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:26.135 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:26.136 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:26.136 16:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:26.136 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:26.136 Found net devices under 0000:86:00.0: cvl_0_0 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:26.136 Found net devices under 0000:86:00.1: cvl_0_1 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:26.136 16:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.136 16:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:26.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:14:26.136 00:14:26.136 --- 10.0.0.2 ping statistics --- 00:14:26.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.136 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:14:26.136 00:14:26.136 --- 10.0.0.1 ping statistics --- 00:14:26.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.136 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=499058 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:26.136 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 499058 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 499058 ']' 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.137 [2024-10-14 16:39:30.152136] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:14:26.137 [2024-10-14 16:39:30.152182] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.137 [2024-10-14 16:39:30.223875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.137 [2024-10-14 16:39:30.264866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.137 [2024-10-14 16:39:30.264899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.137 [2024-10-14 16:39:30.264906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.137 [2024-10-14 16:39:30.264912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.137 [2024-10-14 16:39:30.264917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:26.137 [2024-10-14 16:39:30.265449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:26.137 [2024-10-14 16:39:30.563877] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:26.137 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:26.137 Malloc1 00:14:26.395 16:39:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:26.395 Malloc2 00:14:26.395 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:26.654 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:26.912 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.912 [2024-10-14 16:39:31.537419] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.171 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:27.171 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 05c14382-66df-4f04-8d21-a0bca052b16e -a 10.0.0.2 -s 4420 -i 4 00:14:27.171 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.171 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:27.171 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.171 16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:27.171 
16:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:29.074 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:29.074 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:29.074 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.074 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:29.074 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.074 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:29.074 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:29.074 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:29.333 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:29.333 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:29.333 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:29.333 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.333 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.333 [ 0]:0x1 00:14:29.333 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.333 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.333 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b053c8332c894bb98d59fd110545fd5f 00:14:29.333 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b053c8332c894bb98d59fd110545fd5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.333 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:29.592 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:29.592 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.592 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.592 [ 0]:0x1 00:14:29.592 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.592 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b053c8332c894bb98d59fd110545fd5f 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b053c8332c894bb98d59fd110545fd5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.592 16:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.592 [ 1]:0x2 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=692700c9a4474f5cb198528508ddcf46 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 692700c9a4474f5cb198528508ddcf46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.592 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.851 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:30.113 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:30.113 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 05c14382-66df-4f04-8d21-a0bca052b16e -a 10.0.0.2 -s 4420 -i 4 00:14:30.379 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:30.379 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:30.379 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.379 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:30.379 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:30.379 16:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:32.333 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.334 [ 0]:0x2 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=692700c9a4474f5cb198528508ddcf46 00:14:32.334 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 692700c9a4474f5cb198528508ddcf46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.592 16:39:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:32.592 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:32.592 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.592 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:32.592 [ 0]:0x1 00:14:32.592 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:32.592 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.592 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b053c8332c894bb98d59fd110545fd5f 00:14:32.592 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b053c8332c894bb98d59fd110545fd5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.592 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:32.850 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:32.850 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:32.850 [ 1]:0x2 00:14:32.850 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:32.850 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:32.850 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=692700c9a4474f5cb198528508ddcf46 00:14:32.850 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 692700c9a4474f5cb198528508ddcf46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:32.850 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:33.109 16:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:33.109 [ 0]:0x2 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=692700c9a4474f5cb198528508ddcf46 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 692700c9a4474f5cb198528508ddcf46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.109 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:33.367 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:33.367 16:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 05c14382-66df-4f04-8d21-a0bca052b16e -a 10.0.0.2 -s 4420 -i 4 00:14:33.625 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:33.625 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:33.625 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:33.625 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:33.625 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:33.625 16:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:35.525 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:35.525 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:35.525 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:35.525 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:35.525 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:35.525 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:35.525 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:35.525 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:35.784 [ 0]:0x1 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b053c8332c894bb98d59fd110545fd5f 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b053c8332c894bb98d59fd110545fd5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:35.784 [ 1]:0x2 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=692700c9a4474f5cb198528508ddcf46 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 692700c9a4474f5cb198528508ddcf46 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:35.784 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:36.043 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:36.043 [ 0]:0x2 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=692700c9a4474f5cb198528508ddcf46 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 692700c9a4474f5cb198528508ddcf46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.302 16:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:36.302 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:36.302 [2024-10-14 16:39:40.895842] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:36.302 request: 00:14:36.302 { 00:14:36.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.302 "nsid": 2, 00:14:36.302 "host": "nqn.2016-06.io.spdk:host1", 00:14:36.302 "method": "nvmf_ns_remove_host", 00:14:36.302 "req_id": 1 00:14:36.302 } 00:14:36.302 Got JSON-RPC error response 00:14:36.303 response: 00:14:36.303 { 00:14:36.303 "code": -32602, 00:14:36.303 "message": "Invalid parameters" 00:14:36.303 } 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:36.303 16:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:36.303 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:36.562 [ 0]:0x2 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:36.562 16:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=692700c9a4474f5cb198528508ddcf46 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 692700c9a4474f5cb198528508ddcf46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:36.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=500976 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 500976 
/var/tmp/host.sock 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 500976 ']' 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:36.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.562 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:36.562 [2024-10-14 16:39:41.129270] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:14:36.562 [2024-10-14 16:39:41.129320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid500976 ] 00:14:36.822 [2024-10-14 16:39:41.198358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.822 [2024-10-14 16:39:41.238802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.080 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:37.080 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:37.080 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.080 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:37.338 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7ef8e02d-2f71-41fe-b6fc-051c9608dbf5 00:14:37.338 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:37.338 16:39:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7EF8E02D2F7141FEB6FC051C9608DBF5 -i 00:14:37.596 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6433dfe7-a147-4f13-9cc7-3d438b74eb2e 00:14:37.596 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:37.596 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6433DFE7A1474F139CC73D438B74EB2E -i 00:14:37.855 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:37.855 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:38.113 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:38.113 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:38.372 nvme0n1 00:14:38.372 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:38.372 16:39:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:38.630 nvme1n2 00:14:38.630 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:38.630 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:38.630 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:38.630 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:38.630 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:38.889 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:38.889 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:38.889 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:38.889 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:39.148 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7ef8e02d-2f71-41fe-b6fc-051c9608dbf5 == \7\e\f\8\e\0\2\d\-\2\f\7\1\-\4\1\f\e\-\b\6\f\c\-\0\5\1\c\9\6\0\8\d\b\f\5 ]] 00:14:39.148 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:39.148 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:39.148 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
6433dfe7-a147-4f13-9cc7-3d438b74eb2e == \6\4\3\3\d\f\e\7\-\a\1\4\7\-\4\f\1\3\-\9\c\c\7\-\3\d\4\3\8\b\7\4\e\b\2\e ]] 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 500976 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 500976 ']' 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 500976 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 500976 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 500976' 00:14:39.406 killing process with pid 500976 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 500976 00:14:39.406 16:39:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 500976 00:14:39.665 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.924 rmmod nvme_tcp 00:14:39.924 rmmod nvme_fabrics 00:14:39.924 rmmod nvme_keyring 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 499058 ']' 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 499058 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 499058 ']' 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 499058 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 499058 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 499058' 00:14:39.924 killing process with pid 499058 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 499058 00:14:39.924 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 499058 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.193 16:39:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:42.729 00:14:42.729 real 0m22.837s 00:14:42.729 user 0m24.188s 00:14:42.729 sys 0m6.680s 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:42.729 ************************************ 00:14:42.729 END TEST nvmf_ns_masking 00:14:42.729 ************************************ 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
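The ns_masking run above boils down to a short RPC/nvme-cli sequence. What follows is a hand-written sketch of that flow (not the test script itself), reusing the subsystem NQN, host NQN, listener 10.0.0.2:4420, bdev Malloc1 and controller node /dev/nvme0 exactly as they appear in this log; the rpc.py path is the one used in this workspace, and all of these would need adjusting on another setup.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2016-06.io.spdk:host1

  # Expose the bdev as namespace 1 without auto-visibility, then grant it to a single host.
  $RPC nvmf_subsystem_add_ns $SUBNQN Malloc1 -n 1 --no-auto-visible
  $RPC nvmf_ns_add_host $SUBNQN 1 $HOSTNQN

  # Connect as that host over TCP and confirm the namespace is visible (non-zero NGUID).
  # The run above additionally pins a host UUID with -I and uses -i 4 I/O queues; both are optional here.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $SUBNQN -q $HOSTNQN
  nvme list-ns /dev/nvme0
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

  # Revoke the grant; the same query then reads back an all-zero NGUID, as seen in the log.
  $RPC nvmf_ns_remove_host $SUBNQN 1 $HOSTNQN
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

  nvme disconnect -n $SUBNQN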
00:14:42.729 ************************************ 00:14:42.729 START TEST nvmf_nvme_cli 00:14:42.729 ************************************ 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:42.729 * Looking for test storage... 00:14:42.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:42.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.729 --rc genhtml_branch_coverage=1 00:14:42.729 --rc genhtml_function_coverage=1 00:14:42.729 --rc genhtml_legend=1 00:14:42.729 --rc geninfo_all_blocks=1 00:14:42.729 --rc geninfo_unexecuted_blocks=1 00:14:42.729 00:14:42.729 ' 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:42.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.729 --rc genhtml_branch_coverage=1 00:14:42.729 --rc genhtml_function_coverage=1 00:14:42.729 --rc genhtml_legend=1 00:14:42.729 --rc geninfo_all_blocks=1 00:14:42.729 --rc geninfo_unexecuted_blocks=1 00:14:42.729 00:14:42.729 ' 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:42.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.729 --rc genhtml_branch_coverage=1 00:14:42.729 --rc genhtml_function_coverage=1 00:14:42.729 --rc genhtml_legend=1 00:14:42.729 --rc geninfo_all_blocks=1 00:14:42.729 --rc geninfo_unexecuted_blocks=1 00:14:42.729 00:14:42.729 ' 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:42.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.729 --rc genhtml_branch_coverage=1 00:14:42.729 --rc genhtml_function_coverage=1 00:14:42.729 --rc genhtml_legend=1 00:14:42.729 --rc geninfo_all_blocks=1 00:14:42.729 --rc geninfo_unexecuted_blocks=1 00:14:42.729 00:14:42.729 ' 00:14:42.729 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.730 16:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:42.730 16:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:42.730 16:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:49.299 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:49.299 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.299 
16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:49.299 Found net devices under 0000:86:00.0: cvl_0_0 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:49.299 Found net devices under 0000:86:00.1: cvl_0_1 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:49.299 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:49.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:14:49.300 00:14:49.300 --- 10.0.0.2 ping statistics --- 00:14:49.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.300 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
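The network plumbing nvmf_tcp_init has just set up, pulled together as one runnable sketch: one port of the E810 NIC found above (cvl_0_0) is moved into its own namespace and becomes the target, the other port (cvl_0_1) stays in the root namespace as the initiator. Interface names and the 10.0.0.x/24 addresses are the ones this run detected and assigned; the iptables comment tag added by the ipts helper is dropped for brevity.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic (port 4420) in through the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check reachability in both directions, as the pings below do
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1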
00:14:49.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:14:49.300 00:14:49.300 --- 10.0.0.1 ping statistics --- 00:14:49.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.300 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:49.300 16:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=505213 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 505213 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 505213 ']' 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 [2024-10-14 16:39:53.091017] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
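From here on the target runs inside that namespace: nvmfappstart launches nvmf_tgt through the namespace prefix and then waits for its RPC socket. A minimal equivalent of the two steps traced above; the polling loop is a simplified stand-in for the waitforlisten helper.

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &        # -m 0xF: reactors on cores 0-3; -e 0xFFFF: all tracepoint groups
  nvmfpid=$!
  # do not issue RPCs until the app is listening on its UNIX-domain RPC socket
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done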
00:14:49.300 [2024-10-14 16:39:53.091067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.300 [2024-10-14 16:39:53.164473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.300 [2024-10-14 16:39:53.206214] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.300 [2024-10-14 16:39:53.206253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.300 [2024-10-14 16:39:53.206259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.300 [2024-10-14 16:39:53.206267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.300 [2024-10-14 16:39:53.206272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.300 [2024-10-14 16:39:53.207882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.300 [2024-10-14 16:39:53.207986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.300 [2024-10-14 16:39:53.208090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.300 [2024-10-14 16:39:53.208090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 [2024-10-14 16:39:53.352679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 Malloc0 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
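With the target up, the test provisions it over RPC: a TCP transport, two 64 MiB malloc bdevs, one subsystem exposing both as namespaces, and data plus discovery listeners on 10.0.0.2:4420 (the subsystem and listener calls appear in the next log lines). rpc_cmd is the test framework's wrapper around the same RPCs; issued directly with scripts/rpc.py the sequence would look roughly like this:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192              # TCP transport with the test's tuning flags
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB ram disk, 512-byte blocks
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420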
00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 Malloc1 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 [2024-10-14 16:39:53.441392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:49.300 00:14:49.300 Discovery Log Number of Records 2, Generation counter 2 00:14:49.300 =====Discovery Log Entry 0====== 00:14:49.300 trtype: tcp 00:14:49.300 adrfam: ipv4 00:14:49.300 subtype: current discovery subsystem 00:14:49.300 treq: not required 00:14:49.300 portid: 0 00:14:49.300 trsvcid: 4420 00:14:49.300 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:49.300 traddr: 10.0.0.2 00:14:49.300 eflags: explicit discovery connections, duplicate discovery information 00:14:49.300 sectype: none 00:14:49.300 =====Discovery Log Entry 1====== 00:14:49.300 trtype: tcp 00:14:49.300 adrfam: ipv4 00:14:49.300 subtype: nvme subsystem 00:14:49.300 treq: not required 00:14:49.300 portid: 0 00:14:49.300 trsvcid: 4420 00:14:49.300 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:49.300 traddr: 10.0.0.2 00:14:49.300 eflags: none 00:14:49.300 sectype: none 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:49.300 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:49.301 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:49.301 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:49.301 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:49.301 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:49.301 16:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:50.235 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:50.235 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:50.235 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.235 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:50.235 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:50.235 16:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:52.765 16:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:52.765 /dev/nvme0n2 ]] 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.765 16:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:52.765 16:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.765 rmmod nvme_tcp 00:14:52.765 rmmod nvme_fabrics 00:14:52.765 rmmod nvme_keyring 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 505213 ']' 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 505213 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 505213 ']' 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 505213 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 505213 
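The initiator-side flow the log above records, condensed: discover, connect, wait until both namespaces surface as block devices, enumerate them, disconnect. The hostnqn/hostid are the ones generated for this run, and the polling loop stands in for the waitforserial helper.

  host_opts="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562"
  nvme discover $host_opts -t tcp -a 10.0.0.2 -s 4420        # lists the discovery subsystem and cnode1
  nvme connect  $host_opts -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # wait until both malloc namespaces show up with the subsystem serial
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do sleep 2; done
  nvme list                                                  # /dev/nvme0n1 and /dev/nvme0n2 in this run
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1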
00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 505213' 00:14:52.765 killing process with pid 505213 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 505213 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 505213 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.765 16:39:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:55.297 00:14:55.297 real 0m12.609s 00:14:55.297 user 0m18.225s 00:14:55.297 sys 0m5.150s 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.297 ************************************ 00:14:55.297 END TEST nvmf_nvme_cli 00:14:55.297 ************************************ 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:55.297 ************************************ 00:14:55.297 START TEST nvmf_vfio_user 00:14:55.297 ************************************ 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:55.297 * Looking for test storage... 00:14:55.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:55.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.297 --rc genhtml_branch_coverage=1 00:14:55.297 --rc genhtml_function_coverage=1 00:14:55.297 --rc genhtml_legend=1 00:14:55.297 --rc geninfo_all_blocks=1 00:14:55.297 --rc geninfo_unexecuted_blocks=1 00:14:55.297 00:14:55.297 ' 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:55.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.297 --rc genhtml_branch_coverage=1 00:14:55.297 --rc genhtml_function_coverage=1 00:14:55.297 --rc genhtml_legend=1 00:14:55.297 --rc geninfo_all_blocks=1 00:14:55.297 --rc geninfo_unexecuted_blocks=1 00:14:55.297 00:14:55.297 ' 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:55.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.297 --rc genhtml_branch_coverage=1 00:14:55.297 --rc genhtml_function_coverage=1 00:14:55.297 --rc genhtml_legend=1 00:14:55.297 --rc geninfo_all_blocks=1 00:14:55.297 --rc geninfo_unexecuted_blocks=1 00:14:55.297 00:14:55.297 ' 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:55.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:55.297 --rc genhtml_branch_coverage=1 00:14:55.297 --rc genhtml_function_coverage=1 00:14:55.297 --rc genhtml_legend=1 00:14:55.297 --rc geninfo_all_blocks=1 00:14:55.297 --rc geninfo_unexecuted_blocks=1 00:14:55.297 00:14:55.297 ' 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:55.297 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:55.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
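The vfio-user test starting here drives the same target binary through a different transport. Its knobs, taken from this line and the ones immediately below: two emulated controllers (NUM_DEVICES=2), each backed by a 64 MiB, 512-byte-block malloc bdev, with all sockets kept under /var/run/vfio-user.

  MALLOC_BDEV_SIZE=64
  MALLOC_BLOCK_SIZE=512
  NUM_DEVICES=2
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  export TEST_TRANSPORT=VFIOUSER
  rm -rf /var/run/vfio-user          # start from a clean socket directory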
00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=506291 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 506291' 00:14:55.298 Process pid: 506291 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 506291 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 506291 ']' 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.298 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:55.298 [2024-10-14 16:39:59.770405] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:14:55.298 [2024-10-14 16:39:59.770454] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.298 [2024-10-14 16:39:59.839352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:55.298 [2024-10-14 16:39:59.881428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.298 [2024-10-14 16:39:59.881466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:55.298 [2024-10-14 16:39:59.881480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.298 [2024-10-14 16:39:59.881489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.298 [2024-10-14 16:39:59.881495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.298 [2024-10-14 16:39:59.886623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.298 [2024-10-14 16:39:59.886662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.298 [2024-10-14 16:39:59.886769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.298 [2024-10-14 16:39:59.886770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.557 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:55.557 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:55.557 16:39:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:56.498 16:40:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:56.760 16:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:56.760 16:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:56.760 16:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:56.760 16:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:56.760 16:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:57.017 Malloc1 00:14:57.017 16:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:57.017 16:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:57.274 16:40:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:57.532 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:57.532 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:57.532 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:57.790 Malloc2 00:14:57.790 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
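Provisioning for vfio-user mirrors the TCP case, except that the listener address is a filesystem directory in which the emulated controller's socket is created (-s 0: the service id is effectively unused for this transport). The per-device setup just traced for vfio-user1, and repeated for vfio-user2 in the next lines, written out as direct rpc.py calls:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      # for VFIOUSER the listener "address" is the directory holding the vfio-user socket
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
          -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done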
00:14:58.047 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:58.047 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:58.306 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:58.306 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:58.306 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.306 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:58.306 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:58.306 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:58.306 [2024-10-14 16:40:02.879019] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:14:58.306 [2024-10-14 16:40:02.879052] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid506955 ] 00:14:58.306 [2024-10-14 16:40:02.907899] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:58.306 [2024-10-14 16:40:02.915891] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:58.306 [2024-10-14 16:40:02.915910] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9419fdb000 00:14:58.306 [2024-10-14 16:40:02.916883] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.306 [2024-10-14 16:40:02.917885] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.306 [2024-10-14 16:40:02.918886] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.306 [2024-10-14 16:40:02.919893] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:58.306 [2024-10-14 16:40:02.920900] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:58.306 [2024-10-14 16:40:02.921902] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.306 [2024-10-14 16:40:02.922907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:58.306 [2024-10-14 16:40:02.923916] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:58.306 [2024-10-14 16:40:02.924926] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:58.306 [2024-10-14 16:40:02.924939] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9419fd0000 00:14:58.306 [2024-10-14 16:40:02.925855] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:58.306 [2024-10-14 16:40:02.935296] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:58.306 [2024-10-14 16:40:02.935323] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:58.306 [2024-10-14 16:40:02.940017] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:58.306 [2024-10-14 16:40:02.940051] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:58.306 [2024-10-14 16:40:02.940124] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:58.306 [2024-10-14 16:40:02.940142] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:58.306 [2024-10-14 16:40:02.940151] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:58.306 [2024-10-14 16:40:02.941011] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:58.306 [2024-10-14 16:40:02.941020] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:58.306 [2024-10-14 16:40:02.941026] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:58.306 [2024-10-14 16:40:02.942015] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:58.306 [2024-10-14 16:40:02.942023] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:58.306 [2024-10-14 16:40:02.942029] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:58.567 [2024-10-14 16:40:02.943024] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:58.567 [2024-10-14 16:40:02.943033] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:58.567 [2024-10-14 16:40:02.944030] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:58.567 [2024-10-14 
16:40:02.944038] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:58.567 [2024-10-14 16:40:02.944043] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:58.567 [2024-10-14 16:40:02.944048] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:58.567 [2024-10-14 16:40:02.944155] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:58.567 [2024-10-14 16:40:02.944159] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:58.567 [2024-10-14 16:40:02.944164] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:58.567 [2024-10-14 16:40:02.945037] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:58.567 [2024-10-14 16:40:02.946041] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:58.567 [2024-10-14 16:40:02.947046] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:58.567 [2024-10-14 16:40:02.948047] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:58.567 [2024-10-14 16:40:02.948128] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:58.567 [2024-10-14 16:40:02.949064] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:58.567 [2024-10-14 16:40:02.949079] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:58.567 [2024-10-14 16:40:02.949084] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949102] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:58.567 [2024-10-14 16:40:02.949112] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949127] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:58.567 [2024-10-14 16:40:02.949132] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.567 [2024-10-14 16:40:02.949135] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.567 [2024-10-14 16:40:02.949148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.567 [2024-10-14 16:40:02.949205] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:58.567 [2024-10-14 16:40:02.949214] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:58.567 [2024-10-14 16:40:02.949219] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:58.567 [2024-10-14 16:40:02.949223] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:58.567 [2024-10-14 16:40:02.949227] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:58.567 [2024-10-14 16:40:02.949231] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:58.567 [2024-10-14 16:40:02.949235] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:58.567 [2024-10-14 16:40:02.949240] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949247] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:58.567 [2024-10-14 16:40:02.949273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:58.567 [2024-10-14 16:40:02.949283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.567 [2024-10-14 16:40:02.949290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.567 [2024-10-14 16:40:02.949298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.567 [2024-10-14 16:40:02.949305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:58.567 [2024-10-14 16:40:02.949309] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949316] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:58.567 [2024-10-14 16:40:02.949330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:58.567 [2024-10-14 16:40:02.949335] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:58.567 [2024-10-14 16:40:02.949343] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949351] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949357] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:58.567 [2024-10-14 16:40:02.949372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:58.567 [2024-10-14 16:40:02.949422] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949429] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949435] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:58.567 [2024-10-14 16:40:02.949440] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:58.567 [2024-10-14 16:40:02.949442] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.567 [2024-10-14 16:40:02.949448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:58.567 [2024-10-14 16:40:02.949459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:58.567 [2024-10-14 16:40:02.949470] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:58.567 [2024-10-14 16:40:02.949481] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949488] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:58.567 [2024-10-14 16:40:02.949494] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:58.567 [2024-10-14 16:40:02.949498] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.567 [2024-10-14 16:40:02.949501] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.567 [2024-10-14 16:40:02.949506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.568 [2024-10-14 16:40:02.949530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:58.568 [2024-10-14 16:40:02.949540] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:58.568 [2024-10-14 16:40:02.949548] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:58.568 [2024-10-14 16:40:02.949553] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:58.568 [2024-10-14 16:40:02.949557] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.568 [2024-10-14 16:40:02.949560] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.568 [2024-10-14 16:40:02.949566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.568 [2024-10-14 16:40:02.949579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:58.568 [2024-10-14 16:40:02.949589] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:58.568 [2024-10-14 16:40:02.949595] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:58.568 [2024-10-14 16:40:02.949606] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:58.568 [2024-10-14 16:40:02.949612] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:58.568 [2024-10-14 16:40:02.949617] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:58.568 [2024-10-14 16:40:02.949621] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:58.568 [2024-10-14 16:40:02.949626] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:58.568 [2024-10-14 16:40:02.949630] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:58.568 [2024-10-14 16:40:02.949634] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:58.568 [2024-10-14 16:40:02.949650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:58.568 [2024-10-14 16:40:02.949657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:58.568 [2024-10-14 16:40:02.949667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:58.568 [2024-10-14 16:40:02.949677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:58.568 [2024-10-14 16:40:02.949687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:58.568 [2024-10-14 16:40:02.949697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:58.568 [2024-10-14 16:40:02.949707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:58.568 [2024-10-14 16:40:02.949713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:58.568 [2024-10-14 16:40:02.949725] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:58.568 [2024-10-14 16:40:02.949729] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:58.568 [2024-10-14 16:40:02.949732] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:58.568 [2024-10-14 16:40:02.949735] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:58.568 [2024-10-14 16:40:02.949738] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:58.568 [2024-10-14 16:40:02.949743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:58.568 [2024-10-14 16:40:02.949750] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:58.568 [2024-10-14 16:40:02.949754] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:58.568 [2024-10-14 16:40:02.949757] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.568 [2024-10-14 16:40:02.949764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:58.568 [2024-10-14 16:40:02.949770] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:58.568 [2024-10-14 16:40:02.949774] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:58.568 [2024-10-14 16:40:02.949777] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.568 [2024-10-14 16:40:02.949782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:58.568 [2024-10-14 16:40:02.949788] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:58.568 [2024-10-14 16:40:02.949792] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:58.568 [2024-10-14 16:40:02.949795] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:58.568 [2024-10-14 16:40:02.949800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:58.568 [2024-10-14 16:40:02.949807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:58.568 [2024-10-14 16:40:02.949816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:58.568 [2024-10-14 16:40:02.949826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:58.568 [2024-10-14 16:40:02.949832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:58.568 ===================================================== 00:14:58.568 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:58.568 ===================================================== 00:14:58.568 Controller Capabilities/Features 00:14:58.568 ================================ 00:14:58.568 Vendor ID: 4e58 00:14:58.568 Subsystem Vendor ID: 4e58 00:14:58.568 Serial Number: SPDK1 00:14:58.568 Model Number: SPDK bdev Controller 00:14:58.568 Firmware Version: 25.01 00:14:58.568 Recommended Arb Burst: 6 00:14:58.568 IEEE OUI Identifier: 8d 6b 50 00:14:58.568 Multi-path I/O 00:14:58.568 May have multiple subsystem ports: Yes 00:14:58.568 May have multiple controllers: Yes 00:14:58.568 Associated with SR-IOV VF: No 00:14:58.568 Max Data Transfer Size: 131072 00:14:58.568 Max Number of Namespaces: 32 00:14:58.568 Max Number of I/O Queues: 127 00:14:58.568 NVMe Specification Version (VS): 1.3 00:14:58.568 NVMe Specification Version (Identify): 1.3 00:14:58.568 Maximum Queue Entries: 256 00:14:58.568 Contiguous Queues Required: Yes 00:14:58.568 Arbitration Mechanisms Supported 00:14:58.568 Weighted Round Robin: Not Supported 00:14:58.568 Vendor Specific: Not Supported 00:14:58.568 Reset Timeout: 15000 ms 00:14:58.568 Doorbell Stride: 4 bytes 00:14:58.568 NVM Subsystem Reset: Not Supported 00:14:58.568 Command Sets Supported 00:14:58.568 NVM Command Set: Supported 00:14:58.568 Boot Partition: Not Supported 00:14:58.568 Memory Page Size Minimum: 4096 bytes 00:14:58.568 Memory Page Size Maximum: 4096 bytes 00:14:58.568 Persistent Memory Region: Not Supported 00:14:58.568 Optional Asynchronous Events Supported 00:14:58.568 Namespace Attribute Notices: Supported 00:14:58.568 Firmware Activation Notices: Not Supported 00:14:58.568 ANA Change Notices: Not Supported 00:14:58.568 PLE Aggregate Log Change Notices: Not Supported 00:14:58.568 LBA Status Info Alert Notices: Not Supported 00:14:58.568 EGE Aggregate Log Change Notices: Not Supported 00:14:58.568 Normal NVM Subsystem Shutdown event: Not Supported 00:14:58.568 Zone Descriptor Change Notices: Not Supported 00:14:58.568 Discovery Log Change Notices: Not Supported 00:14:58.568 Controller Attributes 00:14:58.568 128-bit Host Identifier: Supported 00:14:58.568 Non-Operational Permissive Mode: Not Supported 00:14:58.568 NVM Sets: Not Supported 00:14:58.568 Read Recovery Levels: Not Supported 00:14:58.568 Endurance Groups: Not Supported 00:14:58.568 Predictable Latency Mode: Not Supported 00:14:58.568 Traffic Based Keep ALive: Not Supported 00:14:58.568 Namespace Granularity: Not Supported 00:14:58.568 SQ Associations: Not Supported 00:14:58.568 UUID List: Not Supported 00:14:58.568 Multi-Domain Subsystem: Not Supported 00:14:58.568 Fixed Capacity Management: Not Supported 00:14:58.568 Variable Capacity Management: Not Supported 00:14:58.568 Delete Endurance Group: Not Supported 00:14:58.568 Delete NVM Set: Not Supported 00:14:58.568 Extended LBA Formats Supported: Not Supported 00:14:58.568 Flexible Data Placement Supported: Not Supported 00:14:58.568 00:14:58.568 Controller Memory Buffer Support 00:14:58.568 ================================ 00:14:58.568 Supported: No 00:14:58.568 00:14:58.568 Persistent Memory Region Support 00:14:58.568 
================================ 00:14:58.568 Supported: No 00:14:58.568 00:14:58.568 Admin Command Set Attributes 00:14:58.568 ============================ 00:14:58.568 Security Send/Receive: Not Supported 00:14:58.568 Format NVM: Not Supported 00:14:58.568 Firmware Activate/Download: Not Supported 00:14:58.568 Namespace Management: Not Supported 00:14:58.568 Device Self-Test: Not Supported 00:14:58.568 Directives: Not Supported 00:14:58.568 NVMe-MI: Not Supported 00:14:58.568 Virtualization Management: Not Supported 00:14:58.568 Doorbell Buffer Config: Not Supported 00:14:58.568 Get LBA Status Capability: Not Supported 00:14:58.568 Command & Feature Lockdown Capability: Not Supported 00:14:58.568 Abort Command Limit: 4 00:14:58.568 Async Event Request Limit: 4 00:14:58.568 Number of Firmware Slots: N/A 00:14:58.568 Firmware Slot 1 Read-Only: N/A 00:14:58.568 Firmware Activation Without Reset: N/A 00:14:58.568 Multiple Update Detection Support: N/A 00:14:58.568 Firmware Update Granularity: No Information Provided 00:14:58.568 Per-Namespace SMART Log: No 00:14:58.568 Asymmetric Namespace Access Log Page: Not Supported 00:14:58.569 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:58.569 Command Effects Log Page: Supported 00:14:58.569 Get Log Page Extended Data: Supported 00:14:58.569 Telemetry Log Pages: Not Supported 00:14:58.569 Persistent Event Log Pages: Not Supported 00:14:58.569 Supported Log Pages Log Page: May Support 00:14:58.569 Commands Supported & Effects Log Page: Not Supported 00:14:58.569 Feature Identifiers & Effects Log Page:May Support 00:14:58.569 NVMe-MI Commands & Effects Log Page: May Support 00:14:58.569 Data Area 4 for Telemetry Log: Not Supported 00:14:58.569 Error Log Page Entries Supported: 128 00:14:58.569 Keep Alive: Supported 00:14:58.569 Keep Alive Granularity: 10000 ms 00:14:58.569 00:14:58.569 NVM Command Set Attributes 00:14:58.569 ========================== 00:14:58.569 Submission Queue Entry Size 00:14:58.569 Max: 64 00:14:58.569 Min: 64 00:14:58.569 Completion Queue Entry Size 00:14:58.569 Max: 16 00:14:58.569 Min: 16 00:14:58.569 Number of Namespaces: 32 00:14:58.569 Compare Command: Supported 00:14:58.569 Write Uncorrectable Command: Not Supported 00:14:58.569 Dataset Management Command: Supported 00:14:58.569 Write Zeroes Command: Supported 00:14:58.569 Set Features Save Field: Not Supported 00:14:58.569 Reservations: Not Supported 00:14:58.569 Timestamp: Not Supported 00:14:58.569 Copy: Supported 00:14:58.569 Volatile Write Cache: Present 00:14:58.569 Atomic Write Unit (Normal): 1 00:14:58.569 Atomic Write Unit (PFail): 1 00:14:58.569 Atomic Compare & Write Unit: 1 00:14:58.569 Fused Compare & Write: Supported 00:14:58.569 Scatter-Gather List 00:14:58.569 SGL Command Set: Supported (Dword aligned) 00:14:58.569 SGL Keyed: Not Supported 00:14:58.569 SGL Bit Bucket Descriptor: Not Supported 00:14:58.569 SGL Metadata Pointer: Not Supported 00:14:58.569 Oversized SGL: Not Supported 00:14:58.569 SGL Metadata Address: Not Supported 00:14:58.569 SGL Offset: Not Supported 00:14:58.569 Transport SGL Data Block: Not Supported 00:14:58.569 Replay Protected Memory Block: Not Supported 00:14:58.569 00:14:58.569 Firmware Slot Information 00:14:58.569 ========================= 00:14:58.569 Active slot: 1 00:14:58.569 Slot 1 Firmware Revision: 25.01 00:14:58.569 00:14:58.569 00:14:58.569 Commands Supported and Effects 00:14:58.569 ============================== 00:14:58.569 Admin Commands 00:14:58.569 -------------- 00:14:58.569 Get Log Page (02h): Supported 
00:14:58.569 Identify (06h): Supported 00:14:58.569 Abort (08h): Supported 00:14:58.569 Set Features (09h): Supported 00:14:58.569 Get Features (0Ah): Supported 00:14:58.569 Asynchronous Event Request (0Ch): Supported 00:14:58.569 Keep Alive (18h): Supported 00:14:58.569 I/O Commands 00:14:58.569 ------------ 00:14:58.569 Flush (00h): Supported LBA-Change 00:14:58.569 Write (01h): Supported LBA-Change 00:14:58.569 Read (02h): Supported 00:14:58.569 Compare (05h): Supported 00:14:58.569 Write Zeroes (08h): Supported LBA-Change 00:14:58.569 Dataset Management (09h): Supported LBA-Change 00:14:58.569 Copy (19h): Supported LBA-Change 00:14:58.569 00:14:58.569 Error Log 00:14:58.569 ========= 00:14:58.569 00:14:58.569 Arbitration 00:14:58.569 =========== 00:14:58.569 Arbitration Burst: 1 00:14:58.569 00:14:58.569 Power Management 00:14:58.569 ================ 00:14:58.569 Number of Power States: 1 00:14:58.569 Current Power State: Power State #0 00:14:58.569 Power State #0: 00:14:58.569 Max Power: 0.00 W 00:14:58.569 Non-Operational State: Operational 00:14:58.569 Entry Latency: Not Reported 00:14:58.569 Exit Latency: Not Reported 00:14:58.569 Relative Read Throughput: 0 00:14:58.569 Relative Read Latency: 0 00:14:58.569 Relative Write Throughput: 0 00:14:58.569 Relative Write Latency: 0 00:14:58.569 Idle Power: Not Reported 00:14:58.569 Active Power: Not Reported 00:14:58.569 Non-Operational Permissive Mode: Not Supported 00:14:58.569 00:14:58.569 Health Information 00:14:58.569 ================== 00:14:58.569 Critical Warnings: 00:14:58.569 Available Spare Space: OK 00:14:58.569 Temperature: OK 00:14:58.569 Device Reliability: OK 00:14:58.569 Read Only: No 00:14:58.569 Volatile Memory Backup: OK 00:14:58.569 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:58.569 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:58.569 Available Spare: 0% 00:14:58.569 Available Sp[2024-10-14 16:40:02.949915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:58.569 [2024-10-14 16:40:02.949922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:58.569 [2024-10-14 16:40:02.949948] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:58.569 [2024-10-14 16:40:02.949956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.569 [2024-10-14 16:40:02.949962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.569 [2024-10-14 16:40:02.949967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.569 [2024-10-14 16:40:02.949972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:58.569 [2024-10-14 16:40:02.953609] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:58.569 [2024-10-14 16:40:02.953621] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:58.569 [2024-10-14 16:40:02.954090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:14:58.569 [2024-10-14 16:40:02.954141] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:58.569 [2024-10-14 16:40:02.954151] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:58.569 [2024-10-14 16:40:02.955094] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:58.569 [2024-10-14 16:40:02.955106] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:58.569 [2024-10-14 16:40:02.955156] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:58.569 [2024-10-14 16:40:02.956125] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:58.569 are Threshold: 0% 00:14:58.569 Life Percentage Used: 0% 00:14:58.569 Data Units Read: 0 00:14:58.569 Data Units Written: 0 00:14:58.569 Host Read Commands: 0 00:14:58.569 Host Write Commands: 0 00:14:58.569 Controller Busy Time: 0 minutes 00:14:58.569 Power Cycles: 0 00:14:58.569 Power On Hours: 0 hours 00:14:58.569 Unsafe Shutdowns: 0 00:14:58.569 Unrecoverable Media Errors: 0 00:14:58.569 Lifetime Error Log Entries: 0 00:14:58.569 Warning Temperature Time: 0 minutes 00:14:58.569 Critical Temperature Time: 0 minutes 00:14:58.569 00:14:58.569 Number of Queues 00:14:58.569 ================ 00:14:58.569 Number of I/O Submission Queues: 127 00:14:58.569 Number of I/O Completion Queues: 127 00:14:58.569 00:14:58.569 Active Namespaces 00:14:58.569 ================= 00:14:58.569 Namespace ID:1 00:14:58.569 Error Recovery Timeout: Unlimited 00:14:58.569 Command Set Identifier: NVM (00h) 00:14:58.569 Deallocate: Supported 00:14:58.569 Deallocated/Unwritten Error: Not Supported 00:14:58.569 Deallocated Read Value: Unknown 00:14:58.569 Deallocate in Write Zeroes: Not Supported 00:14:58.569 Deallocated Guard Field: 0xFFFF 00:14:58.569 Flush: Supported 00:14:58.569 Reservation: Supported 00:14:58.569 Namespace Sharing Capabilities: Multiple Controllers 00:14:58.569 Size (in LBAs): 131072 (0GiB) 00:14:58.569 Capacity (in LBAs): 131072 (0GiB) 00:14:58.569 Utilization (in LBAs): 131072 (0GiB) 00:14:58.569 NGUID: 4074A5F035624C36A87E8EB0920FD5DE 00:14:58.569 UUID: 4074a5f0-3562-4c36-a87e-8eb0920fd5de 00:14:58.569 Thin Provisioning: Not Supported 00:14:58.569 Per-NS Atomic Units: Yes 00:14:58.569 Atomic Boundary Size (Normal): 0 00:14:58.569 Atomic Boundary Size (PFail): 0 00:14:58.569 Atomic Boundary Offset: 0 00:14:58.569 Maximum Single Source Range Length: 65535 00:14:58.569 Maximum Copy Length: 65535 00:14:58.569 Maximum Source Range Count: 1 00:14:58.569 NGUID/EUI64 Never Reused: No 00:14:58.569 Namespace Write Protected: No 00:14:58.569 Number of LBA Formats: 1 00:14:58.569 Current LBA Format: LBA Format #00 00:14:58.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:58.569 00:14:58.569 16:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:58.569 [2024-10-14 16:40:03.171420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:03.839 Initializing NVMe Controllers 00:15:03.839 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:03.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:03.839 Initialization complete. Launching workers. 00:15:03.839 ======================================================== 00:15:03.839 Latency(us) 00:15:03.839 Device Information : IOPS MiB/s Average min max 00:15:03.839 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39905.69 155.88 3207.39 953.22 9619.14 00:15:03.839 ======================================================== 00:15:03.839 Total : 39905.69 155.88 3207.39 953.22 9619.14 00:15:03.839 00:15:03.839 [2024-10-14 16:40:08.192434] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:03.839 16:40:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:03.839 [2024-10-14 16:40:08.410458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.104 Initializing NVMe Controllers 00:15:09.104 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.104 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:09.104 Initialization complete. Launching workers. 00:15:09.104 ======================================================== 00:15:09.104 Latency(us) 00:15:09.104 Device Information : IOPS MiB/s Average min max 00:15:09.105 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.98 62.70 7979.92 7766.77 8085.91 00:15:09.105 ======================================================== 00:15:09.105 Total : 16050.98 62.70 7979.92 7766.77 8085.91 00:15:09.105 00:15:09.105 [2024-10-14 16:40:13.452811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.105 16:40:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:09.105 [2024-10-14 16:40:13.648751] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:14.372 [2024-10-14 16:40:18.724917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:14.372 Initializing NVMe Controllers 00:15:14.372 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:14.372 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:14.372 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:14.372 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:14.372 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:14.372 Initialization complete. Launching workers. 
00:15:14.372 Starting thread on core 2 00:15:14.372 Starting thread on core 3 00:15:14.372 Starting thread on core 1 00:15:14.372 16:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:14.630 [2024-10-14 16:40:19.009985] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.914 [2024-10-14 16:40:22.136832] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.914 Initializing NVMe Controllers 00:15:17.914 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.914 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:17.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:17.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:17.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:17.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:17.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:17.914 Initialization complete. Launching workers. 00:15:17.914 Starting thread on core 1 with urgent priority queue 00:15:17.914 Starting thread on core 2 with urgent priority queue 00:15:17.914 Starting thread on core 3 with urgent priority queue 00:15:17.914 Starting thread on core 0 with urgent priority queue 00:15:17.914 SPDK bdev Controller (SPDK1 ) core 0: 7966.33 IO/s 12.55 secs/100000 ios 00:15:17.914 SPDK bdev Controller (SPDK1 ) core 1: 6554.67 IO/s 15.26 secs/100000 ios 00:15:17.914 SPDK bdev Controller (SPDK1 ) core 2: 6480.00 IO/s 15.43 secs/100000 ios 00:15:17.914 SPDK bdev Controller (SPDK1 ) core 3: 7465.67 IO/s 13.39 secs/100000 ios 00:15:17.914 ======================================================== 00:15:17.914 00:15:17.914 16:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:17.914 [2024-10-14 16:40:22.405337] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.914 Initializing NVMe Controllers 00:15:17.914 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.914 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:17.914 Namespace ID: 1 size: 0GB 00:15:17.914 Initialization complete. 00:15:17.914 INFO: using host memory buffer for IO 00:15:17.914 Hello world! 
00:15:17.914 [2024-10-14 16:40:22.439541] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.914 16:40:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:18.173 [2024-10-14 16:40:22.706697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.111 Initializing NVMe Controllers 00:15:19.111 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.111 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.111 Initialization complete. Launching workers. 00:15:19.111 submit (in ns) avg, min, max = 6608.2, 3146.7, 3999959.0 00:15:19.111 complete (in ns) avg, min, max = 18515.4, 1717.1, 4993493.3 00:15:19.111 00:15:19.111 Submit histogram 00:15:19.111 ================ 00:15:19.111 Range in us Cumulative Count 00:15:19.111 3.139 - 3.154: 0.0060% ( 1) 00:15:19.111 3.154 - 3.170: 0.0298% ( 4) 00:15:19.111 3.170 - 3.185: 0.0655% ( 6) 00:15:19.111 3.185 - 3.200: 0.1013% ( 6) 00:15:19.111 3.200 - 3.215: 0.3038% ( 34) 00:15:19.111 3.215 - 3.230: 1.5426% ( 208) 00:15:19.111 3.230 - 3.246: 4.9851% ( 578) 00:15:19.111 3.246 - 3.261: 9.2436% ( 715) 00:15:19.111 3.261 - 3.276: 13.9786% ( 795) 00:15:19.111 3.276 - 3.291: 19.8809% ( 991) 00:15:19.111 3.291 - 3.307: 26.3848% ( 1092) 00:15:19.111 3.307 - 3.322: 32.1382% ( 966) 00:15:19.111 3.322 - 3.337: 38.5706% ( 1080) 00:15:19.111 3.337 - 3.352: 44.2585% ( 955) 00:15:19.111 3.352 - 3.368: 49.5593% ( 890) 00:15:19.111 3.368 - 3.383: 56.3848% ( 1146) 00:15:19.111 3.383 - 3.398: 64.3121% ( 1331) 00:15:19.111 3.398 - 3.413: 69.3687% ( 849) 00:15:19.111 3.413 - 3.429: 74.6158% ( 881) 00:15:19.111 3.429 - 3.444: 79.1007% ( 753) 00:15:19.111 3.444 - 3.459: 82.1501% ( 512) 00:15:19.111 3.459 - 3.474: 84.3955% ( 377) 00:15:19.111 3.474 - 3.490: 85.9500% ( 261) 00:15:19.111 3.490 - 3.505: 86.9446% ( 167) 00:15:19.111 3.505 - 3.520: 87.7129% ( 129) 00:15:19.111 3.520 - 3.535: 88.3383% ( 105) 00:15:19.111 3.535 - 3.550: 89.0947% ( 127) 00:15:19.111 3.550 - 3.566: 89.8690% ( 130) 00:15:19.111 3.566 - 3.581: 90.6730% ( 135) 00:15:19.111 3.581 - 3.596: 91.5843% ( 153) 00:15:19.111 3.596 - 3.611: 92.4836% ( 151) 00:15:19.111 3.611 - 3.627: 93.4306% ( 159) 00:15:19.111 3.627 - 3.642: 94.5861% ( 194) 00:15:19.111 3.642 - 3.657: 95.5628% ( 164) 00:15:19.111 3.657 - 3.672: 96.3192% ( 127) 00:15:19.111 3.672 - 3.688: 97.0280% ( 119) 00:15:19.111 3.688 - 3.703: 97.7129% ( 115) 00:15:19.111 3.703 - 3.718: 98.2192% ( 85) 00:15:19.111 3.718 - 3.733: 98.5587% ( 57) 00:15:19.111 3.733 - 3.749: 98.8267% ( 45) 00:15:19.111 3.749 - 3.764: 99.0530% ( 38) 00:15:19.111 3.764 - 3.779: 99.2198% ( 28) 00:15:19.111 3.779 - 3.794: 99.3627% ( 24) 00:15:19.111 3.794 - 3.810: 99.4521% ( 15) 00:15:19.111 3.810 - 3.825: 99.4759% ( 4) 00:15:19.111 3.825 - 3.840: 99.5414% ( 11) 00:15:19.111 3.840 - 3.855: 99.5771% ( 6) 00:15:19.111 3.855 - 3.870: 99.5890% ( 2) 00:15:19.111 3.870 - 3.886: 99.5950% ( 1) 00:15:19.111 3.886 - 3.901: 99.6010% ( 1) 00:15:19.111 3.901 - 3.931: 99.6069% ( 1) 00:15:19.111 3.931 - 3.962: 99.6129% ( 1) 00:15:19.111 3.992 - 4.023: 99.6188% ( 1) 00:15:19.111 4.602 - 4.632: 99.6248% ( 1) 00:15:19.111 4.693 - 4.724: 99.6307% ( 1) 00:15:19.111 4.754 - 4.785: 99.6367% ( 1) 00:15:19.111 4.846 - 4.876: 99.6546% ( 3) 
00:15:19.111 4.876 - 4.907: 99.6605% ( 1) 00:15:19.111 4.937 - 4.968: 99.6665% ( 1) 00:15:19.111 4.998 - 5.029: 99.6784% ( 2) 00:15:19.111 5.059 - 5.090: 99.6843% ( 1) 00:15:19.111 5.090 - 5.120: 99.6962% ( 2) 00:15:19.111 5.150 - 5.181: 99.7022% ( 1) 00:15:19.111 5.181 - 5.211: 99.7082% ( 1) 00:15:19.111 5.303 - 5.333: 99.7201% ( 2) 00:15:19.111 5.425 - 5.455: 99.7260% ( 1) 00:15:19.111 5.455 - 5.486: 99.7379% ( 2) 00:15:19.111 5.486 - 5.516: 99.7618% ( 4) 00:15:19.111 5.516 - 5.547: 99.7677% ( 1) 00:15:19.111 5.547 - 5.577: 99.7796% ( 2) 00:15:19.111 5.577 - 5.608: 99.7856% ( 1) 00:15:19.111 5.638 - 5.669: 99.7915% ( 1) 00:15:19.111 5.699 - 5.730: 99.7975% ( 1) 00:15:19.111 5.790 - 5.821: 99.8094% ( 2) 00:15:19.111 5.851 - 5.882: 99.8213% ( 2) 00:15:19.111 5.882 - 5.912: 99.8273% ( 1) 00:15:19.111 5.943 - 5.973: 99.8332% ( 1) 00:15:19.111 6.339 - 6.370: 99.8392% ( 1) 00:15:19.111 6.370 - 6.400: 99.8451% ( 1) 00:15:19.111 [2024-10-14 16:40:23.731615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.370 6.461 - 6.491: 99.8511% ( 1) 00:15:19.370 6.522 - 6.552: 99.8571% ( 1) 00:15:19.370 6.735 - 6.766: 99.8630% ( 1) 00:15:19.370 6.857 - 6.888: 99.8690% ( 1) 00:15:19.370 7.070 - 7.101: 99.8749% ( 1) 00:15:19.370 7.131 - 7.162: 99.8809% ( 1) 00:15:19.370 7.314 - 7.345: 99.8868% ( 1) 00:15:19.370 7.345 - 7.375: 99.8928% ( 1) 00:15:19.370 8.046 - 8.107: 99.8987% ( 1) 00:15:19.370 8.716 - 8.777: 99.9047% ( 1) 00:15:19.370 12.678 - 12.739: 99.9107% ( 1) 00:15:19.370 15.421 - 15.482: 99.9166% ( 1) 00:15:19.370 3011.535 - 3027.139: 99.9226% ( 1) 00:15:19.370 3027.139 - 3042.743: 99.9285% ( 1) 00:15:19.370 3994.575 - 4025.783: 100.0000% ( 12) 00:15:19.370 00:15:19.370 Complete histogram 00:15:19.370 ================== 00:15:19.370 Range in us Cumulative Count 00:15:19.370 1.714 - 1.722: 0.0357% ( 6) 00:15:19.370 1.722 - 1.730: 0.1132% ( 13) 00:15:19.370 1.730 - 1.737: 0.2382% ( 21) 00:15:19.370 1.737 - 1.745: 0.2799% ( 7) 00:15:19.370 1.745 - 1.752: 0.2978% ( 3) 00:15:19.370 1.752 - 1.760: 0.3097% ( 2) 00:15:19.370 1.760 - 1.768: 1.5664% ( 211) 00:15:19.370 1.768 - 1.775: 13.3055% ( 1971) 00:15:19.370 1.775 - 1.783: 46.1048% ( 5507) 00:15:19.370 1.783 - 1.790: 75.0566% ( 4861) 00:15:19.370 1.790 - 1.798: 84.5265% ( 1590) 00:15:19.370 1.798 - 1.806: 87.7248% ( 537) 00:15:19.370 1.806 - 1.813: 90.5182% ( 469) 00:15:19.370 1.813 - 1.821: 91.7094% ( 200) 00:15:19.370 1.821 - 1.829: 92.2811% ( 96) 00:15:19.370 1.829 - 1.836: 93.3294% ( 176) 00:15:19.370 1.836 - 1.844: 94.6218% ( 217) 00:15:19.370 1.844 - 1.851: 95.8428% ( 205) 00:15:19.370 1.851 - 1.859: 97.0816% ( 208) 00:15:19.370 1.859 - 1.867: 98.0881% ( 169) 00:15:19.371 1.867 - 1.874: 98.6242% ( 90) 00:15:19.371 1.874 - 1.882: 98.8803% ( 43) 00:15:19.371 1.882 - 1.890: 98.9934% ( 19) 00:15:19.371 1.890 - 1.897: 99.1423% ( 25) 00:15:19.371 1.897 - 1.905: 99.2079% ( 11) 00:15:19.371 1.905 - 1.912: 99.2496% ( 7) 00:15:19.371 1.912 - 1.920: 99.2734% ( 4) 00:15:19.371 1.920 - 1.928: 99.3151% ( 7) 00:15:19.371 1.928 - 1.935: 99.3448% ( 5) 00:15:19.371 1.935 - 1.943: 99.3627% ( 3) 00:15:19.371 1.943 - 1.950: 99.3687% ( 1) 00:15:19.371 1.950 - 1.966: 99.3985% ( 5) 00:15:19.371 1.966 - 1.981: 99.4104% ( 2) 00:15:19.371 1.996 - 2.011: 99.4223% ( 2) 00:15:19.371 2.011 - 2.027: 99.4282% ( 1) 00:15:19.371 2.027 - 2.042: 99.4342% ( 1) 00:15:19.371 2.133 - 2.149: 99.4401% ( 1) 00:15:19.371 2.164 - 2.179: 99.4461% ( 1) 00:15:19.371 2.270 - 2.286: 99.4521% ( 1) 00:15:19.371 2.423 - 2.438: 99.4580% ( 1) 
00:15:19.371 2.575 - 2.590: 99.4640% ( 1) 00:15:19.371 3.550 - 3.566: 99.4699% ( 1) 00:15:19.371 3.596 - 3.611: 99.4759% ( 1) 00:15:19.371 3.703 - 3.718: 99.4818% ( 1) 00:15:19.371 4.297 - 4.328: 99.4878% ( 1) 00:15:19.371 4.389 - 4.419: 99.4937% ( 1) 00:15:19.371 4.450 - 4.480: 99.4997% ( 1) 00:15:19.371 4.571 - 4.602: 99.5057% ( 1) 00:15:19.371 4.693 - 4.724: 99.5116% ( 1) 00:15:19.371 4.907 - 4.937: 99.5176% ( 1) 00:15:19.371 4.937 - 4.968: 99.5235% ( 1) 00:15:19.371 4.998 - 5.029: 99.5295% ( 1) 00:15:19.371 5.516 - 5.547: 99.5354% ( 1) 00:15:19.371 5.577 - 5.608: 99.5414% ( 1) 00:15:19.371 5.608 - 5.638: 99.5473% ( 1) 00:15:19.371 5.669 - 5.699: 99.5533% ( 1) 00:15:19.371 5.760 - 5.790: 99.5593% ( 1) 00:15:19.371 5.851 - 5.882: 99.5652% ( 1) 00:15:19.371 6.034 - 6.065: 99.5712% ( 1) 00:15:19.371 7.253 - 7.284: 99.5771% ( 1) 00:15:19.371 146.286 - 147.261: 99.5831% ( 1) 00:15:19.371 3011.535 - 3027.139: 99.5950% ( 2) 00:15:19.371 3978.971 - 3994.575: 99.6010% ( 1) 00:15:19.371 3994.575 - 4025.783: 99.9821% ( 64) 00:15:19.371 4962.011 - 4993.219: 99.9940% ( 2) 00:15:19.371 4993.219 - 5024.427: 100.0000% ( 1) 00:15:19.371 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:19.371 [ 00:15:19.371 { 00:15:19.371 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:19.371 "subtype": "Discovery", 00:15:19.371 "listen_addresses": [], 00:15:19.371 "allow_any_host": true, 00:15:19.371 "hosts": [] 00:15:19.371 }, 00:15:19.371 { 00:15:19.371 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:19.371 "subtype": "NVMe", 00:15:19.371 "listen_addresses": [ 00:15:19.371 { 00:15:19.371 "trtype": "VFIOUSER", 00:15:19.371 "adrfam": "IPv4", 00:15:19.371 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:19.371 "trsvcid": "0" 00:15:19.371 } 00:15:19.371 ], 00:15:19.371 "allow_any_host": true, 00:15:19.371 "hosts": [], 00:15:19.371 "serial_number": "SPDK1", 00:15:19.371 "model_number": "SPDK bdev Controller", 00:15:19.371 "max_namespaces": 32, 00:15:19.371 "min_cntlid": 1, 00:15:19.371 "max_cntlid": 65519, 00:15:19.371 "namespaces": [ 00:15:19.371 { 00:15:19.371 "nsid": 1, 00:15:19.371 "bdev_name": "Malloc1", 00:15:19.371 "name": "Malloc1", 00:15:19.371 "nguid": "4074A5F035624C36A87E8EB0920FD5DE", 00:15:19.371 "uuid": "4074a5f0-3562-4c36-a87e-8eb0920fd5de" 00:15:19.371 } 00:15:19.371 ] 00:15:19.371 }, 00:15:19.371 { 00:15:19.371 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:19.371 "subtype": "NVMe", 00:15:19.371 "listen_addresses": [ 00:15:19.371 { 00:15:19.371 "trtype": "VFIOUSER", 00:15:19.371 "adrfam": "IPv4", 00:15:19.371 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:19.371 "trsvcid": "0" 00:15:19.371 } 00:15:19.371 ], 00:15:19.371 "allow_any_host": true, 00:15:19.371 "hosts": [], 00:15:19.371 "serial_number": "SPDK2", 00:15:19.371 "model_number": "SPDK bdev Controller", 
00:15:19.371 "max_namespaces": 32, 00:15:19.371 "min_cntlid": 1, 00:15:19.371 "max_cntlid": 65519, 00:15:19.371 "namespaces": [ 00:15:19.371 { 00:15:19.371 "nsid": 1, 00:15:19.371 "bdev_name": "Malloc2", 00:15:19.371 "name": "Malloc2", 00:15:19.371 "nguid": "2007F78EBB004D1381EF0D04366F266D", 00:15:19.371 "uuid": "2007f78e-bb00-4d13-81ef-0d04366f266d" 00:15:19.371 } 00:15:19.371 ] 00:15:19.371 } 00:15:19.371 ] 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=510441 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:19.371 16:40:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:19.629 [2024-10-14 16:40:24.128108] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:19.629 Malloc3 00:15:19.629 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:19.887 [2024-10-14 16:40:24.371838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:19.887 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:19.887 Asynchronous Event Request test 00:15:19.887 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.887 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.887 Registering asynchronous event callbacks... 00:15:19.887 Starting namespace attribute notice tests for all controllers... 00:15:19.887 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:19.887 aer_cb - Changed Namespace 00:15:19.887 Cleaning up... 
00:15:20.147 [ 00:15:20.147 { 00:15:20.147 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:20.147 "subtype": "Discovery", 00:15:20.147 "listen_addresses": [], 00:15:20.147 "allow_any_host": true, 00:15:20.147 "hosts": [] 00:15:20.147 }, 00:15:20.147 { 00:15:20.147 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:20.147 "subtype": "NVMe", 00:15:20.147 "listen_addresses": [ 00:15:20.147 { 00:15:20.147 "trtype": "VFIOUSER", 00:15:20.147 "adrfam": "IPv4", 00:15:20.147 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:20.147 "trsvcid": "0" 00:15:20.147 } 00:15:20.147 ], 00:15:20.147 "allow_any_host": true, 00:15:20.147 "hosts": [], 00:15:20.147 "serial_number": "SPDK1", 00:15:20.147 "model_number": "SPDK bdev Controller", 00:15:20.147 "max_namespaces": 32, 00:15:20.147 "min_cntlid": 1, 00:15:20.147 "max_cntlid": 65519, 00:15:20.147 "namespaces": [ 00:15:20.147 { 00:15:20.147 "nsid": 1, 00:15:20.148 "bdev_name": "Malloc1", 00:15:20.148 "name": "Malloc1", 00:15:20.148 "nguid": "4074A5F035624C36A87E8EB0920FD5DE", 00:15:20.148 "uuid": "4074a5f0-3562-4c36-a87e-8eb0920fd5de" 00:15:20.148 }, 00:15:20.148 { 00:15:20.148 "nsid": 2, 00:15:20.148 "bdev_name": "Malloc3", 00:15:20.148 "name": "Malloc3", 00:15:20.148 "nguid": "97813ED214F3409389C83220AC56D0BD", 00:15:20.148 "uuid": "97813ed2-14f3-4093-89c8-3220ac56d0bd" 00:15:20.148 } 00:15:20.148 ] 00:15:20.148 }, 00:15:20.148 { 00:15:20.148 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:20.148 "subtype": "NVMe", 00:15:20.148 "listen_addresses": [ 00:15:20.148 { 00:15:20.148 "trtype": "VFIOUSER", 00:15:20.148 "adrfam": "IPv4", 00:15:20.148 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:20.148 "trsvcid": "0" 00:15:20.148 } 00:15:20.148 ], 00:15:20.148 "allow_any_host": true, 00:15:20.148 "hosts": [], 00:15:20.148 "serial_number": "SPDK2", 00:15:20.148 "model_number": "SPDK bdev Controller", 00:15:20.148 "max_namespaces": 32, 00:15:20.148 "min_cntlid": 1, 00:15:20.148 "max_cntlid": 65519, 00:15:20.148 "namespaces": [ 00:15:20.148 { 00:15:20.148 "nsid": 1, 00:15:20.148 "bdev_name": "Malloc2", 00:15:20.148 "name": "Malloc2", 00:15:20.148 "nguid": "2007F78EBB004D1381EF0D04366F266D", 00:15:20.148 "uuid": "2007f78e-bb00-4d13-81ef-0d04366f266d" 00:15:20.148 } 00:15:20.148 ] 00:15:20.148 } 00:15:20.148 ] 00:15:20.148 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 510441 00:15:20.148 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.148 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:20.148 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:20.148 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:20.148 [2024-10-14 16:40:24.613635] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:15:20.148 [2024-10-14 16:40:24.613667] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510456 ] 00:15:20.148 [2024-10-14 16:40:24.641766] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:20.148 [2024-10-14 16:40:24.649812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:20.148 [2024-10-14 16:40:24.649834] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2dcc8a7000 00:15:20.148 [2024-10-14 16:40:24.650814] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.148 [2024-10-14 16:40:24.651830] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.148 [2024-10-14 16:40:24.652829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.148 [2024-10-14 16:40:24.653829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.148 [2024-10-14 16:40:24.654843] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.148 [2024-10-14 16:40:24.655856] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.148 [2024-10-14 16:40:24.656856] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:20.148 [2024-10-14 16:40:24.657863] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:20.148 [2024-10-14 16:40:24.658872] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:20.148 [2024-10-14 16:40:24.658885] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2dcc89c000 00:15:20.148 [2024-10-14 16:40:24.659804] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:20.148 [2024-10-14 16:40:24.669149] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:20.148 [2024-10-14 16:40:24.669176] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:20.148 [2024-10-14 16:40:24.674253] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:20.148 [2024-10-14 16:40:24.674290] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:20.148 [2024-10-14 16:40:24.674357] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:20.148 [2024-10-14 
16:40:24.674373] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:20.148 [2024-10-14 16:40:24.674378] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:20.148 [2024-10-14 16:40:24.675259] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:20.148 [2024-10-14 16:40:24.675268] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:20.148 [2024-10-14 16:40:24.675279] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:20.148 [2024-10-14 16:40:24.676271] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:20.148 [2024-10-14 16:40:24.676280] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:20.148 [2024-10-14 16:40:24.676287] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:20.148 [2024-10-14 16:40:24.677275] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:20.148 [2024-10-14 16:40:24.677285] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:20.148 [2024-10-14 16:40:24.678284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:20.148 [2024-10-14 16:40:24.678293] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:20.148 [2024-10-14 16:40:24.678298] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:20.148 [2024-10-14 16:40:24.678303] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:20.148 [2024-10-14 16:40:24.678409] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:20.148 [2024-10-14 16:40:24.678413] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:20.148 [2024-10-14 16:40:24.678417] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:20.148 [2024-10-14 16:40:24.679286] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:20.148 [2024-10-14 16:40:24.680292] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:20.148 [2024-10-14 16:40:24.681301] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:15:20.148 [2024-10-14 16:40:24.682308] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:20.148 [2024-10-14 16:40:24.682355] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:20.148 [2024-10-14 16:40:24.683318] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:20.148 [2024-10-14 16:40:24.683328] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:20.148 [2024-10-14 16:40:24.683333] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:20.148 [2024-10-14 16:40:24.683350] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:20.148 [2024-10-14 16:40:24.683361] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.683373] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.149 [2024-10-14 16:40:24.683380] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.149 [2024-10-14 16:40:24.683383] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.149 [2024-10-14 16:40:24.683395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.689608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.689619] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:20.149 [2024-10-14 16:40:24.689624] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:20.149 [2024-10-14 16:40:24.689628] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:20.149 [2024-10-14 16:40:24.689632] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:20.149 [2024-10-14 16:40:24.689637] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:20.149 [2024-10-14 16:40:24.689642] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:20.149 [2024-10-14 16:40:24.689646] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.689652] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.689664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.697605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.697617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.149 [2024-10-14 16:40:24.697624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.149 [2024-10-14 16:40:24.697631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.149 [2024-10-14 16:40:24.697639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.149 [2024-10-14 16:40:24.697643] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.697652] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.697660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.705604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.705612] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:20.149 [2024-10-14 16:40:24.705617] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.705625] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.705632] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.705640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.713605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.713656] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.713663] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.713670] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:20.149 [2024-10-14 16:40:24.713675] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:20.149 [2024-10-14 16:40:24.713678] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:15:20.149 [2024-10-14 16:40:24.713683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.721604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.721616] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:20.149 [2024-10-14 16:40:24.721624] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.721632] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.721638] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.149 [2024-10-14 16:40:24.721642] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.149 [2024-10-14 16:40:24.721645] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.149 [2024-10-14 16:40:24.721650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.729605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.729618] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.729625] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.729631] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:20.149 [2024-10-14 16:40:24.729635] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.149 [2024-10-14 16:40:24.729638] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.149 [2024-10-14 16:40:24.729644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.736606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.736618] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.736625] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.736635] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.736641] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.736645] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.736650] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.736655] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:20.149 [2024-10-14 16:40:24.736659] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:20.149 [2024-10-14 16:40:24.736663] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:20.149 [2024-10-14 16:40:24.736679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.745606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.745618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.753604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.753615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.761606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.761617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.769607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:20.149 [2024-10-14 16:40:24.769628] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:20.149 [2024-10-14 16:40:24.769634] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:20.149 [2024-10-14 16:40:24.769637] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:20.149 [2024-10-14 16:40:24.769640] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:20.149 [2024-10-14 16:40:24.769643] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:20.149 [2024-10-14 16:40:24.769649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:20.149 [2024-10-14 16:40:24.769655] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:20.149 [2024-10-14 16:40:24.769659] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:20.149 [2024-10-14 16:40:24.769662] 
nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.149 [2024-10-14 16:40:24.769668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.769674] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:20.149 [2024-10-14 16:40:24.769678] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:20.149 [2024-10-14 16:40:24.769682] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.149 [2024-10-14 16:40:24.769688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.769695] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:20.149 [2024-10-14 16:40:24.769698] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:20.149 [2024-10-14 16:40:24.769701] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:20.149 [2024-10-14 16:40:24.769707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:20.149 [2024-10-14 16:40:24.777608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:20.150 [2024-10-14 16:40:24.777622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:20.150 [2024-10-14 16:40:24.777633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:20.150 [2024-10-14 16:40:24.777639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:20.150 ===================================================== 00:15:20.150 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:20.150 ===================================================== 00:15:20.150 Controller Capabilities/Features 00:15:20.150 ================================ 00:15:20.150 Vendor ID: 4e58 00:15:20.150 Subsystem Vendor ID: 4e58 00:15:20.150 Serial Number: SPDK2 00:15:20.150 Model Number: SPDK bdev Controller 00:15:20.150 Firmware Version: 25.01 00:15:20.150 Recommended Arb Burst: 6 00:15:20.150 IEEE OUI Identifier: 8d 6b 50 00:15:20.150 Multi-path I/O 00:15:20.150 May have multiple subsystem ports: Yes 00:15:20.150 May have multiple controllers: Yes 00:15:20.150 Associated with SR-IOV VF: No 00:15:20.150 Max Data Transfer Size: 131072 00:15:20.150 Max Number of Namespaces: 32 00:15:20.150 Max Number of I/O Queues: 127 00:15:20.150 NVMe Specification Version (VS): 1.3 00:15:20.150 NVMe Specification Version (Identify): 1.3 00:15:20.150 Maximum Queue Entries: 256 00:15:20.150 Contiguous Queues Required: Yes 00:15:20.150 Arbitration Mechanisms Supported 00:15:20.150 Weighted Round Robin: Not Supported 00:15:20.150 Vendor Specific: Not Supported 00:15:20.150 Reset Timeout: 15000 ms 00:15:20.150 Doorbell Stride: 4 bytes 00:15:20.150 NVM Subsystem Reset: Not Supported 00:15:20.150 Command 
Sets Supported 00:15:20.150 NVM Command Set: Supported 00:15:20.150 Boot Partition: Not Supported 00:15:20.150 Memory Page Size Minimum: 4096 bytes 00:15:20.150 Memory Page Size Maximum: 4096 bytes 00:15:20.150 Persistent Memory Region: Not Supported 00:15:20.150 Optional Asynchronous Events Supported 00:15:20.150 Namespace Attribute Notices: Supported 00:15:20.150 Firmware Activation Notices: Not Supported 00:15:20.150 ANA Change Notices: Not Supported 00:15:20.150 PLE Aggregate Log Change Notices: Not Supported 00:15:20.150 LBA Status Info Alert Notices: Not Supported 00:15:20.150 EGE Aggregate Log Change Notices: Not Supported 00:15:20.150 Normal NVM Subsystem Shutdown event: Not Supported 00:15:20.150 Zone Descriptor Change Notices: Not Supported 00:15:20.150 Discovery Log Change Notices: Not Supported 00:15:20.150 Controller Attributes 00:15:20.150 128-bit Host Identifier: Supported 00:15:20.150 Non-Operational Permissive Mode: Not Supported 00:15:20.150 NVM Sets: Not Supported 00:15:20.150 Read Recovery Levels: Not Supported 00:15:20.150 Endurance Groups: Not Supported 00:15:20.150 Predictable Latency Mode: Not Supported 00:15:20.150 Traffic Based Keep ALive: Not Supported 00:15:20.150 Namespace Granularity: Not Supported 00:15:20.150 SQ Associations: Not Supported 00:15:20.150 UUID List: Not Supported 00:15:20.150 Multi-Domain Subsystem: Not Supported 00:15:20.150 Fixed Capacity Management: Not Supported 00:15:20.150 Variable Capacity Management: Not Supported 00:15:20.150 Delete Endurance Group: Not Supported 00:15:20.150 Delete NVM Set: Not Supported 00:15:20.150 Extended LBA Formats Supported: Not Supported 00:15:20.150 Flexible Data Placement Supported: Not Supported 00:15:20.150 00:15:20.150 Controller Memory Buffer Support 00:15:20.150 ================================ 00:15:20.150 Supported: No 00:15:20.150 00:15:20.150 Persistent Memory Region Support 00:15:20.150 ================================ 00:15:20.150 Supported: No 00:15:20.150 00:15:20.150 Admin Command Set Attributes 00:15:20.150 ============================ 00:15:20.150 Security Send/Receive: Not Supported 00:15:20.150 Format NVM: Not Supported 00:15:20.150 Firmware Activate/Download: Not Supported 00:15:20.150 Namespace Management: Not Supported 00:15:20.150 Device Self-Test: Not Supported 00:15:20.150 Directives: Not Supported 00:15:20.150 NVMe-MI: Not Supported 00:15:20.150 Virtualization Management: Not Supported 00:15:20.150 Doorbell Buffer Config: Not Supported 00:15:20.150 Get LBA Status Capability: Not Supported 00:15:20.150 Command & Feature Lockdown Capability: Not Supported 00:15:20.150 Abort Command Limit: 4 00:15:20.150 Async Event Request Limit: 4 00:15:20.150 Number of Firmware Slots: N/A 00:15:20.150 Firmware Slot 1 Read-Only: N/A 00:15:20.150 Firmware Activation Without Reset: N/A 00:15:20.150 Multiple Update Detection Support: N/A 00:15:20.150 Firmware Update Granularity: No Information Provided 00:15:20.150 Per-Namespace SMART Log: No 00:15:20.150 Asymmetric Namespace Access Log Page: Not Supported 00:15:20.150 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:20.150 Command Effects Log Page: Supported 00:15:20.150 Get Log Page Extended Data: Supported 00:15:20.150 Telemetry Log Pages: Not Supported 00:15:20.150 Persistent Event Log Pages: Not Supported 00:15:20.150 Supported Log Pages Log Page: May Support 00:15:20.150 Commands Supported & Effects Log Page: Not Supported 00:15:20.150 Feature Identifiers & Effects Log Page:May Support 00:15:20.150 NVMe-MI Commands & Effects Log Page: May Support 
00:15:20.150 Data Area 4 for Telemetry Log: Not Supported 00:15:20.150 Error Log Page Entries Supported: 128 00:15:20.150 Keep Alive: Supported 00:15:20.150 Keep Alive Granularity: 10000 ms 00:15:20.150 00:15:20.150 NVM Command Set Attributes 00:15:20.150 ========================== 00:15:20.150 Submission Queue Entry Size 00:15:20.150 Max: 64 00:15:20.150 Min: 64 00:15:20.150 Completion Queue Entry Size 00:15:20.150 Max: 16 00:15:20.150 Min: 16 00:15:20.150 Number of Namespaces: 32 00:15:20.150 Compare Command: Supported 00:15:20.150 Write Uncorrectable Command: Not Supported 00:15:20.150 Dataset Management Command: Supported 00:15:20.150 Write Zeroes Command: Supported 00:15:20.150 Set Features Save Field: Not Supported 00:15:20.150 Reservations: Not Supported 00:15:20.150 Timestamp: Not Supported 00:15:20.150 Copy: Supported 00:15:20.150 Volatile Write Cache: Present 00:15:20.150 Atomic Write Unit (Normal): 1 00:15:20.150 Atomic Write Unit (PFail): 1 00:15:20.150 Atomic Compare & Write Unit: 1 00:15:20.150 Fused Compare & Write: Supported 00:15:20.150 Scatter-Gather List 00:15:20.150 SGL Command Set: Supported (Dword aligned) 00:15:20.150 SGL Keyed: Not Supported 00:15:20.150 SGL Bit Bucket Descriptor: Not Supported 00:15:20.150 SGL Metadata Pointer: Not Supported 00:15:20.150 Oversized SGL: Not Supported 00:15:20.150 SGL Metadata Address: Not Supported 00:15:20.150 SGL Offset: Not Supported 00:15:20.150 Transport SGL Data Block: Not Supported 00:15:20.150 Replay Protected Memory Block: Not Supported 00:15:20.150 00:15:20.150 Firmware Slot Information 00:15:20.150 ========================= 00:15:20.150 Active slot: 1 00:15:20.150 Slot 1 Firmware Revision: 25.01 00:15:20.150 00:15:20.150 00:15:20.150 Commands Supported and Effects 00:15:20.150 ============================== 00:15:20.150 Admin Commands 00:15:20.150 -------------- 00:15:20.150 Get Log Page (02h): Supported 00:15:20.150 Identify (06h): Supported 00:15:20.150 Abort (08h): Supported 00:15:20.150 Set Features (09h): Supported 00:15:20.150 Get Features (0Ah): Supported 00:15:20.150 Asynchronous Event Request (0Ch): Supported 00:15:20.150 Keep Alive (18h): Supported 00:15:20.150 I/O Commands 00:15:20.150 ------------ 00:15:20.150 Flush (00h): Supported LBA-Change 00:15:20.150 Write (01h): Supported LBA-Change 00:15:20.150 Read (02h): Supported 00:15:20.150 Compare (05h): Supported 00:15:20.150 Write Zeroes (08h): Supported LBA-Change 00:15:20.150 Dataset Management (09h): Supported LBA-Change 00:15:20.150 Copy (19h): Supported LBA-Change 00:15:20.150 00:15:20.150 Error Log 00:15:20.150 ========= 00:15:20.150 00:15:20.150 Arbitration 00:15:20.150 =========== 00:15:20.150 Arbitration Burst: 1 00:15:20.150 00:15:20.150 Power Management 00:15:20.150 ================ 00:15:20.150 Number of Power States: 1 00:15:20.150 Current Power State: Power State #0 00:15:20.150 Power State #0: 00:15:20.150 Max Power: 0.00 W 00:15:20.150 Non-Operational State: Operational 00:15:20.150 Entry Latency: Not Reported 00:15:20.150 Exit Latency: Not Reported 00:15:20.150 Relative Read Throughput: 0 00:15:20.150 Relative Read Latency: 0 00:15:20.150 Relative Write Throughput: 0 00:15:20.150 Relative Write Latency: 0 00:15:20.150 Idle Power: Not Reported 00:15:20.150 Active Power: Not Reported 00:15:20.150 Non-Operational Permissive Mode: Not Supported 00:15:20.150 00:15:20.151 Health Information 00:15:20.151 ================== 00:15:20.151 Critical Warnings: 00:15:20.151 Available Spare Space: OK 00:15:20.151 Temperature: OK 00:15:20.151 Device 
Reliability: OK 00:15:20.151 Read Only: No 00:15:20.151 Volatile Memory Backup: OK 00:15:20.151 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:20.151 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:20.151 Available Spare: 0% 00:15:20.151 Available Sp[2024-10-14 16:40:24.777726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:20.409 [2024-10-14 16:40:24.785606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:20.409 [2024-10-14 16:40:24.785639] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:20.409 [2024-10-14 16:40:24.785648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.409 [2024-10-14 16:40:24.785653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.409 [2024-10-14 16:40:24.785659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.409 [2024-10-14 16:40:24.785664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.409 [2024-10-14 16:40:24.785721] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:20.409 [2024-10-14 16:40:24.785732] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:20.409 [2024-10-14 16:40:24.786721] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:20.409 [2024-10-14 16:40:24.786769] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:20.409 [2024-10-14 16:40:24.786778] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:20.409 [2024-10-14 16:40:24.787736] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:20.409 [2024-10-14 16:40:24.787753] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:20.409 [2024-10-14 16:40:24.787801] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:20.409 [2024-10-14 16:40:24.790607] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:20.409 are Threshold: 0% 00:15:20.409 Life Percentage Used: 0% 00:15:20.409 Data Units Read: 0 00:15:20.409 Data Units Written: 0 00:15:20.409 Host Read Commands: 0 00:15:20.409 Host Write Commands: 0 00:15:20.409 Controller Busy Time: 0 minutes 00:15:20.409 Power Cycles: 0 00:15:20.409 Power On Hours: 0 hours 00:15:20.409 Unsafe Shutdowns: 0 00:15:20.409 Unrecoverable Media Errors: 0 00:15:20.409 Lifetime Error Log Entries: 0 00:15:20.409 Warning Temperature Time: 0 minutes 00:15:20.409 Critical Temperature Time: 0 minutes 00:15:20.409 00:15:20.409 Number of Queues 00:15:20.409 ================ 00:15:20.409 Number of 
I/O Submission Queues: 127 00:15:20.409 Number of I/O Completion Queues: 127 00:15:20.409 00:15:20.409 Active Namespaces 00:15:20.410 ================= 00:15:20.410 Namespace ID:1 00:15:20.410 Error Recovery Timeout: Unlimited 00:15:20.410 Command Set Identifier: NVM (00h) 00:15:20.410 Deallocate: Supported 00:15:20.410 Deallocated/Unwritten Error: Not Supported 00:15:20.410 Deallocated Read Value: Unknown 00:15:20.410 Deallocate in Write Zeroes: Not Supported 00:15:20.410 Deallocated Guard Field: 0xFFFF 00:15:20.410 Flush: Supported 00:15:20.410 Reservation: Supported 00:15:20.410 Namespace Sharing Capabilities: Multiple Controllers 00:15:20.410 Size (in LBAs): 131072 (0GiB) 00:15:20.410 Capacity (in LBAs): 131072 (0GiB) 00:15:20.410 Utilization (in LBAs): 131072 (0GiB) 00:15:20.410 NGUID: 2007F78EBB004D1381EF0D04366F266D 00:15:20.410 UUID: 2007f78e-bb00-4d13-81ef-0d04366f266d 00:15:20.410 Thin Provisioning: Not Supported 00:15:20.410 Per-NS Atomic Units: Yes 00:15:20.410 Atomic Boundary Size (Normal): 0 00:15:20.410 Atomic Boundary Size (PFail): 0 00:15:20.410 Atomic Boundary Offset: 0 00:15:20.410 Maximum Single Source Range Length: 65535 00:15:20.410 Maximum Copy Length: 65535 00:15:20.410 Maximum Source Range Count: 1 00:15:20.410 NGUID/EUI64 Never Reused: No 00:15:20.410 Namespace Write Protected: No 00:15:20.410 Number of LBA Formats: 1 00:15:20.410 Current LBA Format: LBA Format #00 00:15:20.410 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:20.410 00:15:20.410 16:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:20.410 [2024-10-14 16:40:25.011837] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:25.674 Initializing NVMe Controllers 00:15:25.674 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:25.674 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:25.674 Initialization complete. Launching workers. 
00:15:25.674 ======================================================== 00:15:25.674 Latency(us) 00:15:25.674 Device Information : IOPS MiB/s Average min max 00:15:25.674 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39965.49 156.12 3202.62 947.66 6663.92 00:15:25.674 ======================================================== 00:15:25.674 Total : 39965.49 156.12 3202.62 947.66 6663.92 00:15:25.674 00:15:25.674 [2024-10-14 16:40:30.115855] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:25.674 16:40:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:25.931 [2024-10-14 16:40:30.336549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.194 Initializing NVMe Controllers 00:15:31.194 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:31.194 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:31.194 Initialization complete. Launching workers. 00:15:31.194 ======================================================== 00:15:31.194 Latency(us) 00:15:31.194 Device Information : IOPS MiB/s Average min max 00:15:31.194 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39885.00 155.80 3208.83 985.49 7483.16 00:15:31.194 ======================================================== 00:15:31.194 Total : 39885.00 155.80 3208.83 985.49 7483.16 00:15:31.194 00:15:31.194 [2024-10-14 16:40:35.359795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.194 16:40:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:31.194 [2024-10-14 16:40:35.557014] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.543 [2024-10-14 16:40:40.692700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:36.543 Initializing NVMe Controllers 00:15:36.543 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.543 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:36.543 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:36.543 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:36.543 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:36.543 Initialization complete. Launching workers. 
00:15:36.543 Starting thread on core 2 00:15:36.543 Starting thread on core 3 00:15:36.543 Starting thread on core 1 00:15:36.543 16:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:36.543 [2024-10-14 16:40:40.973024] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.822 [2024-10-14 16:40:44.184838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.822 Initializing NVMe Controllers 00:15:39.822 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.822 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:39.822 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:39.822 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:39.822 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:39.822 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:39.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:39.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:39.822 Initialization complete. Launching workers. 00:15:39.822 Starting thread on core 1 with urgent priority queue 00:15:39.822 Starting thread on core 2 with urgent priority queue 00:15:39.822 Starting thread on core 3 with urgent priority queue 00:15:39.822 Starting thread on core 0 with urgent priority queue 00:15:39.822 SPDK bdev Controller (SPDK2 ) core 0: 676.67 IO/s 147.78 secs/100000 ios 00:15:39.822 SPDK bdev Controller (SPDK2 ) core 1: 717.33 IO/s 139.41 secs/100000 ios 00:15:39.822 SPDK bdev Controller (SPDK2 ) core 2: 1022.00 IO/s 97.85 secs/100000 ios 00:15:39.822 SPDK bdev Controller (SPDK2 ) core 3: 621.67 IO/s 160.86 secs/100000 ios 00:15:39.822 ======================================================== 00:15:39.822 00:15:39.822 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:40.080 [2024-10-14 16:40:44.459050] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.080 Initializing NVMe Controllers 00:15:40.080 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.080 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:40.080 Namespace ID: 1 size: 0GB 00:15:40.080 Initialization complete. 00:15:40.080 INFO: using host memory buffer for IO 00:15:40.080 Hello world! 
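Every example binary exercised in this run (spdk_nvme_perf, reconnect, arbitration, hello_world, overhead) attaches to the same vfio-user controller through the -r transport-ID string seen earlier in the trace. A minimal sketch of that invocation pattern, using only flags that appear in this log (the socket path and NQN are the ones created for this run, and the relative paths assume the SPDK build tree as the working directory):

    # Transport ID for the second vfio-user controller set up by this test.
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # Attach the hello_world example, then run a 5-second 4 KiB read pass on core mask 0x2.
    ./build/examples/hello_world -d 256 -g -r "$TRID"
    ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2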
00:15:40.080 [2024-10-14 16:40:44.471129] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.080 16:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:40.337 [2024-10-14 16:40:44.734157] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.269 Initializing NVMe Controllers 00:15:41.269 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.269 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.269 Initialization complete. Launching workers. 00:15:41.269 submit (in ns) avg, min, max = 6777.6, 3201.0, 3999324.8 00:15:41.269 complete (in ns) avg, min, max = 20138.4, 1766.7, 4069397.1 00:15:41.269 00:15:41.269 Submit histogram 00:15:41.269 ================ 00:15:41.269 Range in us Cumulative Count 00:15:41.269 3.200 - 3.215: 0.1387% ( 23) 00:15:41.269 3.215 - 3.230: 0.7658% ( 104) 00:15:41.269 3.230 - 3.246: 1.8634% ( 182) 00:15:41.269 3.246 - 3.261: 4.0704% ( 366) 00:15:41.269 3.261 - 3.276: 8.5992% ( 751) 00:15:41.269 3.276 - 3.291: 14.6234% ( 999) 00:15:41.269 3.291 - 3.307: 20.9009% ( 1041) 00:15:41.269 3.307 - 3.322: 27.4076% ( 1079) 00:15:41.269 3.322 - 3.337: 33.6851% ( 1041) 00:15:41.269 3.337 - 3.352: 39.1123% ( 900) 00:15:41.269 3.352 - 3.368: 44.9135% ( 962) 00:15:41.269 3.368 - 3.383: 51.1488% ( 1034) 00:15:41.269 3.383 - 3.398: 56.2624% ( 848) 00:15:41.269 3.398 - 3.413: 61.5932% ( 884) 00:15:41.269 3.413 - 3.429: 69.0707% ( 1240) 00:15:41.269 3.429 - 3.444: 73.9613% ( 811) 00:15:41.269 3.444 - 3.459: 78.5865% ( 767) 00:15:41.269 3.459 - 3.474: 82.5605% ( 659) 00:15:41.269 3.474 - 3.490: 85.2982% ( 454) 00:15:41.269 3.490 - 3.505: 86.8299% ( 254) 00:15:41.269 3.505 - 3.520: 87.6922% ( 143) 00:15:41.269 3.520 - 3.535: 88.1686% ( 79) 00:15:41.269 3.535 - 3.550: 88.5968% ( 71) 00:15:41.269 3.550 - 3.566: 89.1817% ( 97) 00:15:41.269 3.566 - 3.581: 89.9596% ( 129) 00:15:41.269 3.581 - 3.596: 90.7254% ( 127) 00:15:41.269 3.596 - 3.611: 91.5697% ( 140) 00:15:41.269 3.611 - 3.627: 92.4139% ( 140) 00:15:41.269 3.627 - 3.642: 93.3607% ( 157) 00:15:41.269 3.642 - 3.657: 94.2351% ( 145) 00:15:41.269 3.657 - 3.672: 95.2964% ( 176) 00:15:41.269 3.672 - 3.688: 96.2793% ( 163) 00:15:41.269 3.688 - 3.703: 97.0994% ( 136) 00:15:41.269 3.703 - 3.718: 97.7809% ( 113) 00:15:41.269 3.718 - 3.733: 98.3176% ( 89) 00:15:41.269 3.733 - 3.749: 98.6975% ( 63) 00:15:41.269 3.749 - 3.764: 98.9929% ( 49) 00:15:41.269 3.764 - 3.779: 99.2161% ( 37) 00:15:41.269 3.779 - 3.794: 99.3970% ( 30) 00:15:41.269 3.794 - 3.810: 99.5115% ( 19) 00:15:41.269 3.810 - 3.825: 99.6020% ( 15) 00:15:41.269 3.825 - 3.840: 99.6261% ( 4) 00:15:41.269 3.840 - 3.855: 99.6502% ( 4) 00:15:41.269 3.855 - 3.870: 99.6563% ( 1) 00:15:41.269 3.886 - 3.901: 99.6623% ( 1) 00:15:41.269 3.901 - 3.931: 99.6683% ( 1) 00:15:41.269 4.663 - 4.693: 99.6744% ( 1) 00:15:41.269 4.815 - 4.846: 99.6804% ( 1) 00:15:41.269 4.846 - 4.876: 99.6925% ( 2) 00:15:41.269 4.876 - 4.907: 99.6985% ( 1) 00:15:41.269 4.937 - 4.968: 99.7045% ( 1) 00:15:41.269 4.968 - 4.998: 99.7105% ( 1) 00:15:41.269 5.029 - 5.059: 99.7166% ( 1) 00:15:41.269 5.181 - 5.211: 99.7286% ( 2) 00:15:41.269 5.303 - 5.333: 99.7347% ( 1) 00:15:41.269 5.364 - 5.394: 99.7407% ( 1) 00:15:41.269 5.425 - 5.455: 99.7467% ( 1) 
00:15:41.269 5.455 - 5.486: 99.7648% ( 3) 00:15:41.269 5.547 - 5.577: 99.7769% ( 2) 00:15:41.269 5.608 - 5.638: 99.7829% ( 1) 00:15:41.269 5.669 - 5.699: 99.7889% ( 1) 00:15:41.269 5.851 - 5.882: 99.7950% ( 1) 00:15:41.269 5.882 - 5.912: 99.8010% ( 1) 00:15:41.269 5.943 - 5.973: 99.8070% ( 1) 00:15:41.269 5.973 - 6.004: 99.8131% ( 1) 00:15:41.269 6.004 - 6.034: 99.8191% ( 1) 00:15:41.269 6.095 - 6.126: 99.8251% ( 1) 00:15:41.269 6.126 - 6.156: 99.8312% ( 1) 00:15:41.269 6.156 - 6.187: 99.8372% ( 1) 00:15:41.269 6.370 - 6.400: 99.8432% ( 1) 00:15:41.269 6.400 - 6.430: 99.8492% ( 1) 00:15:41.269 6.430 - 6.461: 99.8553% ( 1) 00:15:41.269 6.522 - 6.552: 99.8613% ( 1) 00:15:41.269 6.552 - 6.583: 99.8673% ( 1) 00:15:41.269 6.583 - 6.613: 99.8734% ( 1) 00:15:41.269 6.644 - 6.674: 99.8794% ( 1) 00:15:41.269 6.827 - 6.857: 99.8854% ( 1) 00:15:41.269 7.284 - 7.314: 99.8915% ( 1) 00:15:41.269 7.467 - 7.497: 99.8975% ( 1) 00:15:41.269 [2024-10-14 16:40:45.827576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:41.269 7.985 - 8.046: 99.9035% ( 1) 00:15:41.269 8.594 - 8.655: 99.9095% ( 1) 00:15:41.269 10.606 - 10.667: 99.9156% ( 1) 00:15:41.269 3994.575 - 4025.783: 100.0000% ( 14) 00:15:41.269 00:15:41.269 Complete histogram 00:15:41.269 ================== 00:15:41.269 Range in us Cumulative Count 00:15:41.269 1.760 - 1.768: 0.0060% ( 1) 00:15:41.269 1.768 - 1.775: 0.0302% ( 4) 00:15:41.269 1.775 - 1.783: 0.1809% ( 25) 00:15:41.269 1.783 - 1.790: 0.6995% ( 86) 00:15:41.269 1.790 - 1.798: 1.5860% ( 147) 00:15:41.269 1.798 - 1.806: 3.1719% ( 263) 00:15:41.269 1.806 - 1.813: 10.9691% ( 1293) 00:15:41.269 1.813 - 1.821: 37.3756% ( 4379) 00:15:41.269 1.821 - 1.829: 69.8305% ( 5382) 00:15:41.269 1.829 - 1.836: 85.3042% ( 2566) 00:15:41.269 1.836 - 1.844: 90.3576% ( 838) 00:15:41.269 1.844 - 1.851: 93.3727% ( 500) 00:15:41.269 1.851 - 1.859: 95.2964% ( 319) 00:15:41.269 1.859 - 1.867: 96.0140% ( 119) 00:15:41.269 1.867 - 1.874: 96.3758% ( 60) 00:15:41.269 1.874 - 1.882: 96.7617% ( 64) 00:15:41.269 1.882 - 1.890: 97.2321% ( 78) 00:15:41.269 1.890 - 1.897: 97.9135% ( 113) 00:15:41.269 1.897 - 1.905: 98.5648% ( 108) 00:15:41.269 1.905 - 1.912: 98.9749% ( 68) 00:15:41.269 1.912 - 1.920: 99.1437% ( 28) 00:15:41.269 1.920 - 1.928: 99.2583% ( 19) 00:15:41.269 1.928 - 1.935: 99.3125% ( 9) 00:15:41.269 1.935 - 1.943: 99.3427% ( 5) 00:15:41.269 1.943 - 1.950: 99.3668% ( 4) 00:15:41.269 1.950 - 1.966: 99.3729% ( 1) 00:15:41.269 1.966 - 1.981: 99.3789% ( 1) 00:15:41.269 1.981 - 1.996: 99.3909% ( 2) 00:15:41.269 1.996 - 2.011: 99.3970% ( 1) 00:15:41.269 2.011 - 2.027: 99.4090% ( 2) 00:15:41.269 2.088 - 2.103: 99.4151% ( 1) 00:15:41.269 2.149 - 2.164: 99.4211% ( 1) 00:15:41.269 3.322 - 3.337: 99.4271% ( 1) 00:15:41.269 3.490 - 3.505: 99.4332% ( 1) 00:15:41.269 3.520 - 3.535: 99.4392% ( 1) 00:15:41.269 3.611 - 3.627: 99.4452% ( 1) 00:15:41.269 3.627 - 3.642: 99.4512% ( 1) 00:15:41.269 3.642 - 3.657: 99.4573% ( 1) 00:15:41.269 3.703 - 3.718: 99.4633% ( 1) 00:15:41.269 3.718 - 3.733: 99.4693% ( 1) 00:15:41.269 3.962 - 3.992: 99.4754% ( 1) 00:15:41.269 4.632 - 4.663: 99.4814% ( 1) 00:15:41.269 4.785 - 4.815: 99.4874% ( 1) 00:15:41.269 4.876 - 4.907: 99.4935% ( 1) 00:15:41.269 5.455 - 5.486: 99.4995% ( 1) 00:15:41.269 5.547 - 5.577: 99.5055% ( 1) 00:15:41.269 5.790 - 5.821: 99.5115% ( 1) 00:15:41.269 5.912 - 5.943: 99.5176% ( 1) 00:15:41.269 6.034 - 6.065: 99.5236% ( 1) 00:15:41.269 6.126 - 6.156: 99.5296% ( 1) 00:15:41.269 7.436 - 7.467: 99.5357% ( 1) 00:15:41.269 
8.472 - 8.533: 99.5417% ( 1) 00:15:41.269 3978.971 - 3994.575: 99.5477% ( 1) 00:15:41.269 3994.575 - 4025.783: 99.9940% ( 74) 00:15:41.270 4056.990 - 4088.198: 100.0000% ( 1) 00:15:41.270 00:15:41.270 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:41.270 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:41.270 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:41.270 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:41.270 16:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:41.527 [ 00:15:41.527 { 00:15:41.527 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:41.527 "subtype": "Discovery", 00:15:41.527 "listen_addresses": [], 00:15:41.527 "allow_any_host": true, 00:15:41.527 "hosts": [] 00:15:41.527 }, 00:15:41.527 { 00:15:41.527 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:41.527 "subtype": "NVMe", 00:15:41.527 "listen_addresses": [ 00:15:41.527 { 00:15:41.527 "trtype": "VFIOUSER", 00:15:41.527 "adrfam": "IPv4", 00:15:41.527 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:41.527 "trsvcid": "0" 00:15:41.527 } 00:15:41.527 ], 00:15:41.527 "allow_any_host": true, 00:15:41.527 "hosts": [], 00:15:41.527 "serial_number": "SPDK1", 00:15:41.527 "model_number": "SPDK bdev Controller", 00:15:41.527 "max_namespaces": 32, 00:15:41.527 "min_cntlid": 1, 00:15:41.527 "max_cntlid": 65519, 00:15:41.527 "namespaces": [ 00:15:41.527 { 00:15:41.527 "nsid": 1, 00:15:41.527 "bdev_name": "Malloc1", 00:15:41.527 "name": "Malloc1", 00:15:41.527 "nguid": "4074A5F035624C36A87E8EB0920FD5DE", 00:15:41.527 "uuid": "4074a5f0-3562-4c36-a87e-8eb0920fd5de" 00:15:41.527 }, 00:15:41.527 { 00:15:41.527 "nsid": 2, 00:15:41.527 "bdev_name": "Malloc3", 00:15:41.527 "name": "Malloc3", 00:15:41.527 "nguid": "97813ED214F3409389C83220AC56D0BD", 00:15:41.527 "uuid": "97813ed2-14f3-4093-89c8-3220ac56d0bd" 00:15:41.527 } 00:15:41.527 ] 00:15:41.527 }, 00:15:41.527 { 00:15:41.527 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:41.527 "subtype": "NVMe", 00:15:41.527 "listen_addresses": [ 00:15:41.527 { 00:15:41.527 "trtype": "VFIOUSER", 00:15:41.527 "adrfam": "IPv4", 00:15:41.527 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:41.527 "trsvcid": "0" 00:15:41.527 } 00:15:41.527 ], 00:15:41.527 "allow_any_host": true, 00:15:41.527 "hosts": [], 00:15:41.527 "serial_number": "SPDK2", 00:15:41.527 "model_number": "SPDK bdev Controller", 00:15:41.527 "max_namespaces": 32, 00:15:41.527 "min_cntlid": 1, 00:15:41.527 "max_cntlid": 65519, 00:15:41.527 "namespaces": [ 00:15:41.527 { 00:15:41.527 "nsid": 1, 00:15:41.527 "bdev_name": "Malloc2", 00:15:41.527 "name": "Malloc2", 00:15:41.527 "nguid": "2007F78EBB004D1381EF0D04366F266D", 00:15:41.527 "uuid": "2007f78e-bb00-4d13-81ef-0d04366f266d" 00:15:41.527 } 00:15:41.527 ] 00:15:41.527 } 00:15:41.527 ] 00:15:41.527 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:41.527 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=513983 00:15:41.527 16:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:41.527 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:41.527 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:41.527 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:41.527 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:41.527 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:41.527 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:41.527 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:41.785 [2024-10-14 16:40:46.224025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:41.785 Malloc4 00:15:41.785 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:42.043 [2024-10-14 16:40:46.468905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:42.043 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.043 Asynchronous Event Request test 00:15:42.043 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.043 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.043 Registering asynchronous event callbacks... 00:15:42.043 Starting namespace attribute notice tests for all controllers... 00:15:42.043 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:42.043 aer_cb - Changed Namespace 00:15:42.043 Cleaning up... 
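The "Changed Namespace" callback above is driven by the sequence traced just before it: the aer example is started in the background against cnode2 (the script records its pid and waits on the touch file), then a new malloc bdev is attached to the live subsystem as a second namespace, which makes the target raise a namespace-attribute-changed asynchronous event. Condensed into a sketch using only the commands visible in this log (relative paths assume the SPDK source tree):

    # Start the AER listener against the vfio-user controller; the surrounding script
    # waits for /tmp/aer_touch_file before proceeding.
    test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -n 2 -g -t /tmp/aer_touch_file &

    # Adding a namespace to the running subsystem triggers the namespace-attribute AER.
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2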
00:15:42.043 [ 00:15:42.043 { 00:15:42.043 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.043 "subtype": "Discovery", 00:15:42.043 "listen_addresses": [], 00:15:42.043 "allow_any_host": true, 00:15:42.043 "hosts": [] 00:15:42.043 }, 00:15:42.043 { 00:15:42.043 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.043 "subtype": "NVMe", 00:15:42.043 "listen_addresses": [ 00:15:42.043 { 00:15:42.043 "trtype": "VFIOUSER", 00:15:42.043 "adrfam": "IPv4", 00:15:42.043 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.043 "trsvcid": "0" 00:15:42.043 } 00:15:42.043 ], 00:15:42.043 "allow_any_host": true, 00:15:42.043 "hosts": [], 00:15:42.043 "serial_number": "SPDK1", 00:15:42.043 "model_number": "SPDK bdev Controller", 00:15:42.043 "max_namespaces": 32, 00:15:42.043 "min_cntlid": 1, 00:15:42.043 "max_cntlid": 65519, 00:15:42.043 "namespaces": [ 00:15:42.043 { 00:15:42.043 "nsid": 1, 00:15:42.043 "bdev_name": "Malloc1", 00:15:42.043 "name": "Malloc1", 00:15:42.043 "nguid": "4074A5F035624C36A87E8EB0920FD5DE", 00:15:42.043 "uuid": "4074a5f0-3562-4c36-a87e-8eb0920fd5de" 00:15:42.043 }, 00:15:42.043 { 00:15:42.043 "nsid": 2, 00:15:42.043 "bdev_name": "Malloc3", 00:15:42.043 "name": "Malloc3", 00:15:42.043 "nguid": "97813ED214F3409389C83220AC56D0BD", 00:15:42.043 "uuid": "97813ed2-14f3-4093-89c8-3220ac56d0bd" 00:15:42.043 } 00:15:42.043 ] 00:15:42.043 }, 00:15:42.043 { 00:15:42.043 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.043 "subtype": "NVMe", 00:15:42.043 "listen_addresses": [ 00:15:42.043 { 00:15:42.043 "trtype": "VFIOUSER", 00:15:42.043 "adrfam": "IPv4", 00:15:42.043 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.043 "trsvcid": "0" 00:15:42.043 } 00:15:42.043 ], 00:15:42.043 "allow_any_host": true, 00:15:42.043 "hosts": [], 00:15:42.043 "serial_number": "SPDK2", 00:15:42.043 "model_number": "SPDK bdev Controller", 00:15:42.043 "max_namespaces": 32, 00:15:42.043 "min_cntlid": 1, 00:15:42.043 "max_cntlid": 65519, 00:15:42.043 "namespaces": [ 00:15:42.043 { 00:15:42.043 "nsid": 1, 00:15:42.043 "bdev_name": "Malloc2", 00:15:42.043 "name": "Malloc2", 00:15:42.043 "nguid": "2007F78EBB004D1381EF0D04366F266D", 00:15:42.043 "uuid": "2007f78e-bb00-4d13-81ef-0d04366f266d" 00:15:42.043 }, 00:15:42.043 { 00:15:42.043 "nsid": 2, 00:15:42.043 "bdev_name": "Malloc4", 00:15:42.043 "name": "Malloc4", 00:15:42.043 "nguid": "9FB22DBBB14449B2A93968BF56020525", 00:15:42.043 "uuid": "9fb22dbb-b144-49b2-a939-68bf56020525" 00:15:42.043 } 00:15:42.043 ] 00:15:42.043 } 00:15:42.043 ] 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 513983 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 506291 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 506291 ']' 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 506291 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 506291 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 506291' 00:15:42.302 killing process with pid 506291 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 506291 00:15:42.302 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 506291 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=514148 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 514148' 00:15:42.560 Process pid: 514148 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 514148 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 514148 ']' 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.560 16:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:42.560 [2024-10-14 16:40:47.028096] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:42.560 [2024-10-14 16:40:47.028967] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:15:42.560 [2024-10-14 16:40:47.029007] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.560 [2024-10-14 16:40:47.096752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.560 [2024-10-14 16:40:47.133388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.560 [2024-10-14 16:40:47.133425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.560 [2024-10-14 16:40:47.133434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.560 [2024-10-14 16:40:47.133441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.560 [2024-10-14 16:40:47.133447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.560 [2024-10-14 16:40:47.135061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.560 [2024-10-14 16:40:47.135167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.560 [2024-10-14 16:40:47.135277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.560 [2024-10-14 16:40:47.135278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.819 [2024-10-14 16:40:47.203098] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:42.819 [2024-10-14 16:40:47.204117] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:42.819 [2024-10-14 16:40:47.204340] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:42.819 [2024-10-14 16:40:47.204952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:42.819 [2024-10-14 16:40:47.204986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
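Once the interrupt-mode nvmf_tgt is up (reactors and interrupt-mode threads above), the trace that follows re-creates the VFIOUSER transport and both vfio-user controllers. The per-device RPC sequence it walks through can be condensed as the following sketch (paths, bdev names and NQNs are the ones used in this run; the loop is an illustrative summary of the two unrolled iterations in the trace):

    # Create the vfio-user transport; -M -I are the extra transport args passed by this test.
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Connecting a host then only needs the transport-ID string (trtype, traddr, subnqn), as the earlier spdk_nvme_perf and example runs against cnode2 show.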
00:15:42.819 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.819 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:42.819 16:40:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:43.754 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:44.013 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:44.013 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:44.013 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:44.013 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:44.013 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:44.013 Malloc1 00:15:44.272 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:44.272 16:40:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:44.531 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:44.789 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:44.789 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:44.789 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:45.047 Malloc2 00:15:45.047 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:45.305 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:45.305 16:40:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:45.563 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:45.563 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 514148 00:15:45.564 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 514148 ']' 00:15:45.564 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 514148 00:15:45.564 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:45.564 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.564 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 514148 00:15:45.564 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:45.564 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:45.564 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 514148' 00:15:45.564 killing process with pid 514148 00:15:45.564 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 514148 00:15:45.564 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 514148 00:15:45.822 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:45.822 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:45.822 00:15:45.822 real 0m50.830s 00:15:45.822 user 3m16.521s 00:15:45.822 sys 0m3.329s 00:15:45.822 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:45.822 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:45.822 ************************************ 00:15:45.822 END TEST nvmf_vfio_user 00:15:45.822 ************************************ 00:15:45.822 16:40:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:45.822 16:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:45.822 16:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:45.822 16:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:45.822 ************************************ 00:15:45.822 START TEST nvmf_vfio_user_nvme_compliance 00:15:45.822 ************************************ 00:15:45.822 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:46.082 * Looking for test storage... 
00:15:46.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:46.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.082 --rc genhtml_branch_coverage=1 00:15:46.082 --rc genhtml_function_coverage=1 00:15:46.082 --rc genhtml_legend=1 00:15:46.082 --rc geninfo_all_blocks=1 00:15:46.082 --rc geninfo_unexecuted_blocks=1 00:15:46.082 00:15:46.082 ' 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:46.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.082 --rc genhtml_branch_coverage=1 00:15:46.082 --rc genhtml_function_coverage=1 00:15:46.082 --rc genhtml_legend=1 00:15:46.082 --rc geninfo_all_blocks=1 00:15:46.082 --rc geninfo_unexecuted_blocks=1 00:15:46.082 00:15:46.082 ' 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:46.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.082 --rc genhtml_branch_coverage=1 00:15:46.082 --rc genhtml_function_coverage=1 00:15:46.082 --rc genhtml_legend=1 00:15:46.082 --rc geninfo_all_blocks=1 00:15:46.082 --rc geninfo_unexecuted_blocks=1 00:15:46.082 00:15:46.082 ' 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:46.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.082 --rc genhtml_branch_coverage=1 00:15:46.082 --rc genhtml_function_coverage=1 00:15:46.082 --rc genhtml_legend=1 00:15:46.082 --rc geninfo_all_blocks=1 00:15:46.082 --rc 
geninfo_unexecuted_blocks=1 00:15:46.082 00:15:46.082 ' 00:15:46.082 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:46.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=514909 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 514909' 00:15:46.083 Process pid: 514909 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 514909 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 514909 ']' 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.083 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.083 [2024-10-14 16:40:50.666577] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:15:46.083 [2024-10-14 16:40:50.666630] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.342 [2024-10-14 16:40:50.735307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:46.342 [2024-10-14 16:40:50.777041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.342 [2024-10-14 16:40:50.777075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.342 [2024-10-14 16:40:50.777085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.342 [2024-10-14 16:40:50.777092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.342 [2024-10-14 16:40:50.777098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.342 [2024-10-14 16:40:50.778509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.342 [2024-10-14 16:40:50.778544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.342 [2024-10-14 16:40:50.778545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.342 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.342 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:46.342 16:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:47.277 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:47.277 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:47.277 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:47.277 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.277 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.277 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.277 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:47.277 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:47.277 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.277 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.535 malloc0 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:47.535 16:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.535 16:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:47.535 00:15:47.535 00:15:47.535 CUnit - A unit testing framework for C - Version 2.1-3 00:15:47.535 http://cunit.sourceforge.net/ 00:15:47.535 00:15:47.535 00:15:47.535 Suite: nvme_compliance 00:15:47.535 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-14 16:40:52.111040] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.535 [2024-10-14 16:40:52.112412] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:47.535 [2024-10-14 16:40:52.112426] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:47.535 [2024-10-14 16:40:52.112432] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:47.535 [2024-10-14 16:40:52.114065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.535 passed 00:15:47.793 Test: admin_identify_ctrlr_verify_fused ...[2024-10-14 16:40:52.192626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.793 [2024-10-14 16:40:52.195639] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:47.793 passed 00:15:47.793 Test: admin_identify_ns ...[2024-10-14 16:40:52.274291] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:47.793 [2024-10-14 16:40:52.334611] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:47.793 [2024-10-14 16:40:52.342615] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:47.793 [2024-10-14 16:40:52.363697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:47.793 passed 00:15:48.051 Test: admin_get_features_mandatory_features ...[2024-10-14 16:40:52.437456] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.052 [2024-10-14 16:40:52.442483] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.052 passed 00:15:48.052 Test: admin_get_features_optional_features ...[2024-10-14 16:40:52.516986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.052 [2024-10-14 16:40:52.520004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.052 passed 00:15:48.052 Test: admin_set_features_number_of_queues ...[2024-10-14 16:40:52.595788] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.310 [2024-10-14 16:40:52.704696] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.310 passed 00:15:48.310 Test: admin_get_log_page_mandatory_logs ...[2024-10-14 16:40:52.778261] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.310 [2024-10-14 16:40:52.781284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.310 passed 00:15:48.310 Test: admin_get_log_page_with_lpo ...[2024-10-14 16:40:52.856933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.310 [2024-10-14 16:40:52.925609] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:48.310 [2024-10-14 16:40:52.938653] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.568 passed 00:15:48.568 Test: fabric_property_get ...[2024-10-14 16:40:53.012421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.568 [2024-10-14 16:40:53.013684] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:48.568 [2024-10-14 16:40:53.015446] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.568 passed 00:15:48.568 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-14 16:40:53.092957] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.568 [2024-10-14 16:40:53.094193] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:48.568 [2024-10-14 16:40:53.095977] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.568 passed 00:15:48.568 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-14 16:40:53.172674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.825 [2024-10-14 16:40:53.258616] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:48.825 [2024-10-14 16:40:53.274617] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:48.825 [2024-10-14 16:40:53.279688] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.825 passed 00:15:48.825 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-14 16:40:53.352413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:48.825 [2024-10-14 16:40:53.355822] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:48.825 [2024-10-14 16:40:53.357439] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:48.826 passed 00:15:48.826 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-14 16:40:53.431772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.083 [2024-10-14 16:40:53.507610] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:49.083 [2024-10-14 16:40:53.531807] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:49.083 [2024-10-14 16:40:53.536883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.083 passed 00:15:49.083 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-14 16:40:53.613115] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.083 [2024-10-14 16:40:53.614348] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:49.083 [2024-10-14 16:40:53.614375] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:49.083 [2024-10-14 16:40:53.616138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.083 passed 00:15:49.083 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-14 16:40:53.693784] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.341 [2024-10-14 16:40:53.786611] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:49.341 [2024-10-14 16:40:53.794635] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:49.341 [2024-10-14 16:40:53.802608] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:49.341 [2024-10-14 16:40:53.810607] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:49.341 [2024-10-14 16:40:53.839684] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.341 passed 00:15:49.341 Test: admin_create_io_sq_verify_pc ...[2024-10-14 16:40:53.915210] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:49.341 [2024-10-14 16:40:53.935615] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:49.341 [2024-10-14 16:40:53.953404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:49.599 passed 00:15:49.599 Test: admin_create_io_qp_max_qps ...[2024-10-14 16:40:54.030922] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:50.533 [2024-10-14 16:40:55.144610] nvme_ctrlr.c:5535:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:51.098 [2024-10-14 16:40:55.532113] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.098 passed 00:15:51.098 Test: admin_create_io_sq_shared_cq ...[2024-10-14 16:40:55.608913] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:51.356 [2024-10-14 16:40:55.740613] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:51.356 [2024-10-14 16:40:55.776681] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:51.356 passed 00:15:51.356 00:15:51.356 Run Summary: Type Total Ran Passed Failed Inactive 00:15:51.356 suites 1 1 n/a 0 0 00:15:51.356 tests 18 18 18 0 0 00:15:51.356 asserts 360 
360 360 0 n/a 00:15:51.356 00:15:51.356 Elapsed time = 1.508 seconds 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 514909 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 514909 ']' 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 514909 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 514909 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 514909' 00:15:51.356 killing process with pid 514909 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 514909 00:15:51.356 16:40:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 514909 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:51.615 00:15:51.615 real 0m5.643s 00:15:51.615 user 0m15.737s 00:15:51.615 sys 0m0.537s 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.615 ************************************ 00:15:51.615 END TEST nvmf_vfio_user_nvme_compliance 00:15:51.615 ************************************ 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.615 ************************************ 00:15:51.615 START TEST nvmf_vfio_user_fuzz 00:15:51.615 ************************************ 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:51.615 * Looking for test storage... 
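The CUnit summary above closes out the 18 vfio-user compliance tests. Replaying just that step by hand against an already-provisioned target (same socket directory and subsystem as in this run) comes down to the single invocation the harness used, shown here as a sketch with SPDK_DIR standing in for the workspace path:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # drive the compliance suite at the VFIO-USER endpoint created earlier
    "$SPDK_DIR"/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'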
00:15:51.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:51.615 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:51.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.874 --rc genhtml_branch_coverage=1 00:15:51.874 --rc genhtml_function_coverage=1 00:15:51.874 --rc genhtml_legend=1 00:15:51.874 --rc geninfo_all_blocks=1 00:15:51.874 --rc geninfo_unexecuted_blocks=1 00:15:51.874 00:15:51.874 ' 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:51.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.874 --rc genhtml_branch_coverage=1 00:15:51.874 --rc genhtml_function_coverage=1 00:15:51.874 --rc genhtml_legend=1 00:15:51.874 --rc geninfo_all_blocks=1 00:15:51.874 --rc geninfo_unexecuted_blocks=1 00:15:51.874 00:15:51.874 ' 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:51.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.874 --rc genhtml_branch_coverage=1 00:15:51.874 --rc genhtml_function_coverage=1 00:15:51.874 --rc genhtml_legend=1 00:15:51.874 --rc geninfo_all_blocks=1 00:15:51.874 --rc geninfo_unexecuted_blocks=1 00:15:51.874 00:15:51.874 ' 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:51.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.874 --rc genhtml_branch_coverage=1 00:15:51.874 --rc genhtml_function_coverage=1 00:15:51.874 --rc genhtml_legend=1 00:15:51.874 --rc geninfo_all_blocks=1 00:15:51.874 --rc geninfo_unexecuted_blocks=1 00:15:51.874 00:15:51.874 ' 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:51.874 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:51.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=515892 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 515892' 00:15:51.875 Process pid: 515892 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 515892 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 515892 ']' 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
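The waitforlisten call traced above (the autotest_common.sh lines) blocks until the new target process, pid 515892, is alive and answering on /var/tmp/spdk.sock. A rough stand-in for what that helper does, assuming only rpc.py and the default socket path, would be a polling loop like:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # poll the RPC socket until the target responds, giving up after 100 attempts
    for i in $(seq 1 100); do
        "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done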
00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.875 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.133 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.133 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:52.133 16:40:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.068 malloc0 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
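At this point the fuzz target has a VFIOUSER transport, a malloc0 namespace under nqn.2021-09.io.spdk:cnode0, and a listener at /var/run/vfio-user, and the trid string above is what gets handed to the fuzzer in the next step. Run by hand, that step reduces to roughly the following (flags copied from the invocation that follows; -m 0x2 pins the fuzzer to core 1, and -t 30 bounds the run, matching the roughly 30 seconds between the timestamps below):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    # fuzz the admin and I/O queues of the vfio-user controller for 30 seconds with the seed the harness passes
    "$SPDK_DIR"/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a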
00:15:53.068 16:40:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:25.140 Fuzzing completed. Shutting down the fuzz application 00:16:25.140 00:16:25.140 Dumping successful admin opcodes: 00:16:25.140 8, 9, 10, 24, 00:16:25.140 Dumping successful io opcodes: 00:16:25.140 0, 00:16:25.140 NS: 0x20000081ef00 I/O qp, Total commands completed: 1006425, total successful commands: 3947, random_seed: 3246116480 00:16:25.140 NS: 0x20000081ef00 admin qp, Total commands completed: 173812, total successful commands: 1411, random_seed: 1419251840 00:16:25.140 16:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:25.140 16:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.140 16:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.140 16:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.140 16:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 515892 00:16:25.140 16:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 515892 ']' 00:16:25.140 16:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 515892 00:16:25.140 16:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:25.140 16:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.140 16:41:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 515892 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 515892' 00:16:25.140 killing process with pid 515892 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 515892 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 515892 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:25.140 00:16:25.140 real 0m32.191s 00:16:25.140 user 0m35.481s 00:16:25.140 sys 0m25.318s 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.140 
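The fuzzer above ran for 30 seconds (-t 30) with a fixed seed (-S 123456) against that transport ID and then dumped which opcodes ever completed successfully; most random commands failing is the expected shape of this negative test. Assuming the opcode lists are printed in decimal, they correspond to ordinary NVMe commands; a purely illustrative lookup:

  declare -A admin_opc=( [8]="Abort" [9]="Set Features" [10]="Get Features" [24]="Keep Alive" )
  declare -A io_opc=( [0]="Flush" )
  for opc in 8 9 10 24; do echo "admin opcode $opc -> ${admin_opc[$opc]}"; done
  for opc in 0;         do echo "io opcode $opc -> ${io_opc[$opc]}"; done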
************************************ 00:16:25.140 END TEST nvmf_vfio_user_fuzz 00:16:25.140 ************************************ 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.140 ************************************ 00:16:25.140 START TEST nvmf_auth_target 00:16:25.140 ************************************ 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:25.140 * Looking for test storage... 00:16:25.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:25.140 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.141 --rc genhtml_branch_coverage=1 00:16:25.141 --rc genhtml_function_coverage=1 00:16:25.141 --rc genhtml_legend=1 00:16:25.141 --rc geninfo_all_blocks=1 00:16:25.141 --rc geninfo_unexecuted_blocks=1 00:16:25.141 00:16:25.141 ' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.141 --rc genhtml_branch_coverage=1 00:16:25.141 --rc genhtml_function_coverage=1 00:16:25.141 --rc genhtml_legend=1 00:16:25.141 --rc geninfo_all_blocks=1 00:16:25.141 --rc geninfo_unexecuted_blocks=1 00:16:25.141 00:16:25.141 ' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.141 --rc genhtml_branch_coverage=1 00:16:25.141 --rc genhtml_function_coverage=1 00:16:25.141 --rc genhtml_legend=1 00:16:25.141 --rc geninfo_all_blocks=1 00:16:25.141 --rc geninfo_unexecuted_blocks=1 00:16:25.141 00:16:25.141 ' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.141 --rc genhtml_branch_coverage=1 00:16:25.141 --rc genhtml_function_coverage=1 00:16:25.141 --rc genhtml_legend=1 00:16:25.141 --rc geninfo_all_blocks=1 00:16:25.141 --rc geninfo_unexecuted_blocks=1 00:16:25.141 00:16:25.141 ' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.141 16:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:25.141 16:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:30.414 
16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:30.414 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.414 16:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:30.414 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:30.414 Found net devices under 0000:86:00.0: cvl_0_0 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:30.414 Found net devices under 0000:86:00.1: cvl_0_1 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.414 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:30.415 16:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:30.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:16:30.415 00:16:30.415 --- 10.0.0.2 ping statistics --- 00:16:30.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.415 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:16:30.415 00:16:30.415 --- 10.0.0.1 ping statistics --- 00:16:30.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.415 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=524201 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 524201 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 524201 ']' 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
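The nvmf_tcp_init trace above builds the usual two-port topology for phy runs: one e810 port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the other (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, with an iptables rule opening TCP/4420 and a ping in each direction to prove reachability. Condensed, using the interface names and addresses from the log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator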
00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=524221 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d806052ad8b9f2105f21a652a263fb436f05a9443ba78b32 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.cZO 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d806052ad8b9f2105f21a652a263fb436f05a9443ba78b32 0 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 d806052ad8b9f2105f21a652a263fb436f05a9443ba78b32 0 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d806052ad8b9f2105f21a652a263fb436f05a9443ba78b32 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.cZO 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.cZO 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.cZO 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f0d1379281f7418b5a6dabbd018dceac53beec054843b276369f09fea0f6b422 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.DSc 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f0d1379281f7418b5a6dabbd018dceac53beec054843b276369f09fea0f6b422 3 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f0d1379281f7418b5a6dabbd018dceac53beec054843b276369f09fea0f6b422 3 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f0d1379281f7418b5a6dabbd018dceac53beec054843b276369f09fea0f6b422 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.DSc 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.DSc 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.DSc 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=1a9a0100867dbc36a0083e9bb88c48da 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.BHL 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 1a9a0100867dbc36a0083e9bb88c48da 1 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 1a9a0100867dbc36a0083e9bb88c48da 1 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=1a9a0100867dbc36a0083e9bb88c48da 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:30.415 16:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.BHL 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.BHL 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.BHL 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=90d1fd7efe8af80a7dab0dcdeac7ff9727f74421cfbf438b 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.1k4 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 90d1fd7efe8af80a7dab0dcdeac7ff9727f74421cfbf438b 2 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 90d1fd7efe8af80a7dab0dcdeac7ff9727f74421cfbf438b 2 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:30.415 16:41:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=90d1fd7efe8af80a7dab0dcdeac7ff9727f74421cfbf438b 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:30.415 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:30.674 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.1k4 00:16:30.674 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.1k4 00:16:30.674 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.1k4 00:16:30.674 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:30.674 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:30.674 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.674 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:30.674 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:30.674 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:30.674 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=e823b86da0706ef9d8d0d4b156f8ba6ce72ee85da685e129 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.sOd 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key e823b86da0706ef9d8d0d4b156f8ba6ce72ee85da685e129 2 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 e823b86da0706ef9d8d0d4b156f8ba6ce72ee85da685e129 2 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=e823b86da0706ef9d8d0d4b156f8ba6ce72ee85da685e129 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.sOd 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.sOd 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.sOd 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=896734481f0e4b065860383de367ec02 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.YH7 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 896734481f0e4b065860383de367ec02 1 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 896734481f0e4b065860383de367ec02 1 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=896734481f0e4b065860383de367ec02 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.YH7 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.YH7 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.YH7 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=1eb771aeceade6de7a1784041fd22b553093fd4b137e8ad91683ff34c548e300 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.RuL 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 1eb771aeceade6de7a1784041fd22b553093fd4b137e8ad91683ff34c548e300 3 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 1eb771aeceade6de7a1784041fd22b553093fd4b137e8ad91683ff34c548e300 3 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=1eb771aeceade6de7a1784041fd22b553093fd4b137e8ad91683ff34c548e300 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.RuL 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.RuL 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.RuL 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 524201 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 524201 ']' 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.675 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.933 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.933 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:30.933 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 524221 /var/tmp/host.sock 00:16:30.933 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 524221 ']' 00:16:30.933 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:30.933 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.933 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:30.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
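Each gen_dhchap_key call in the trace above follows the same recipe: read the requested number of random bytes, hex-encode them with xxd, wrap the hex string into a DHHC-1 secret for the chosen digest (that conversion is done by the inline python helper in nvmf/common.sh and is not reproduced here), then store the result in a private temp file. A stripped-down sketch of the byte-generation and file-handling half, for the 48-hex-character null-digest key used as keys[0]:

  len=48                                             # hex characters requested
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # 24 random bytes as one hex string
  file=$(mktemp -t spdk.key-null.XXX)
  # the real helper rewrites $key as a "DHHC-1:..." secret before it lands in $file
  echo "$key" > "$file"
  chmod 0600 "$file"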
00:16:30.933 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.933 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cZO 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cZO 00:16:31.192 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cZO 00:16:31.450 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.DSc ]] 00:16:31.450 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DSc 00:16:31.450 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.450 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.450 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.450 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DSc 00:16:31.450 16:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DSc 00:16:31.450 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:31.450 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BHL 00:16:31.450 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.450 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.450 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.450 16:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BHL 00:16:31.450 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BHL 00:16:31.709 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.1k4 ]] 00:16:31.709 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1k4 00:16:31.709 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.709 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.709 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.709 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1k4 00:16:31.709 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1k4 00:16:31.967 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:31.967 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.sOd 00:16:31.967 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.967 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.967 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.967 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.sOd 00:16:31.967 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.sOd 00:16:32.226 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.YH7 ]] 00:16:32.226 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YH7 00:16:32.226 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.226 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.226 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.226 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YH7 00:16:32.226 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YH7 00:16:32.484 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:32.484 16:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.RuL 00:16:32.484 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.484 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.484 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.484 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.RuL 00:16:32.484 16:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.RuL 00:16:32.484 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:32.484 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:32.484 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.484 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.484 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.484 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.743 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.743 
16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.002 00:16:33.002 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.002 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.002 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.261 { 00:16:33.261 "cntlid": 1, 00:16:33.261 "qid": 0, 00:16:33.261 "state": "enabled", 00:16:33.261 "thread": "nvmf_tgt_poll_group_000", 00:16:33.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:33.261 "listen_address": { 00:16:33.261 "trtype": "TCP", 00:16:33.261 "adrfam": "IPv4", 00:16:33.261 "traddr": "10.0.0.2", 00:16:33.261 "trsvcid": "4420" 00:16:33.261 }, 00:16:33.261 "peer_address": { 00:16:33.261 "trtype": "TCP", 00:16:33.261 "adrfam": "IPv4", 00:16:33.261 "traddr": "10.0.0.1", 00:16:33.261 "trsvcid": "53790" 00:16:33.261 }, 00:16:33.261 "auth": { 00:16:33.261 "state": "completed", 00:16:33.261 "digest": "sha256", 00:16:33.261 "dhgroup": "null" 00:16:33.261 } 00:16:33.261 } 00:16:33.261 ]' 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.261 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.524 16:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.524 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:16:33.524 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:16:34.098 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.098 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.098 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.098 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.098 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.098 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.098 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.098 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.356 16:41:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.356 16:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.614 00:16:34.614 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.614 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.614 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.873 { 00:16:34.873 "cntlid": 3, 00:16:34.873 "qid": 0, 00:16:34.873 "state": "enabled", 00:16:34.873 "thread": "nvmf_tgt_poll_group_000", 00:16:34.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:34.873 "listen_address": { 00:16:34.873 "trtype": "TCP", 00:16:34.873 "adrfam": "IPv4", 00:16:34.873 "traddr": "10.0.0.2", 00:16:34.873 "trsvcid": "4420" 00:16:34.873 }, 00:16:34.873 "peer_address": { 00:16:34.873 "trtype": "TCP", 00:16:34.873 "adrfam": "IPv4", 00:16:34.873 "traddr": "10.0.0.1", 00:16:34.873 "trsvcid": "53814" 00:16:34.873 }, 00:16:34.873 "auth": { 00:16:34.873 "state": "completed", 00:16:34.873 "digest": "sha256", 00:16:34.873 "dhgroup": "null" 00:16:34.873 } 00:16:34.873 } 00:16:34.873 ]' 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.873 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.131 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:16:35.131 16:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:16:35.698 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.698 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.698 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.698 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.698 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.698 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.698 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.698 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.957 16:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.957 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.215 00:16:36.215 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.215 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.215 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.473 { 00:16:36.473 "cntlid": 5, 00:16:36.473 "qid": 0, 00:16:36.473 "state": "enabled", 00:16:36.473 "thread": "nvmf_tgt_poll_group_000", 00:16:36.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:36.473 "listen_address": { 00:16:36.473 "trtype": "TCP", 00:16:36.473 "adrfam": "IPv4", 00:16:36.473 "traddr": "10.0.0.2", 00:16:36.473 "trsvcid": "4420" 00:16:36.473 }, 00:16:36.473 "peer_address": { 00:16:36.473 "trtype": "TCP", 00:16:36.473 "adrfam": "IPv4", 00:16:36.473 "traddr": "10.0.0.1", 00:16:36.473 "trsvcid": "53850" 00:16:36.473 }, 00:16:36.473 "auth": { 00:16:36.473 "state": "completed", 00:16:36.473 "digest": "sha256", 00:16:36.473 "dhgroup": "null" 00:16:36.473 } 00:16:36.473 } 00:16:36.473 ]' 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:36.473 16:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.473 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.473 16:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.474 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.731 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:16:36.731 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:16:37.297 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.298 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.298 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.298 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.298 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.298 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.298 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.298 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.556 16:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.556 00:16:37.814 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.814 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.814 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.814 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.814 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.814 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.814 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.814 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.814 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.814 { 00:16:37.814 "cntlid": 7, 00:16:37.814 "qid": 0, 00:16:37.814 "state": "enabled", 00:16:37.814 "thread": "nvmf_tgt_poll_group_000", 00:16:37.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:37.814 "listen_address": { 00:16:37.814 "trtype": "TCP", 00:16:37.814 "adrfam": "IPv4", 00:16:37.814 "traddr": "10.0.0.2", 00:16:37.814 "trsvcid": "4420" 00:16:37.814 }, 00:16:37.814 "peer_address": { 00:16:37.814 "trtype": "TCP", 00:16:37.814 "adrfam": "IPv4", 00:16:37.814 "traddr": "10.0.0.1", 00:16:37.814 "trsvcid": "53870" 00:16:37.814 }, 00:16:37.814 "auth": { 00:16:37.814 "state": "completed", 00:16:37.814 "digest": "sha256", 00:16:37.814 "dhgroup": "null" 00:16:37.814 } 00:16:37.814 } 00:16:37.814 ]' 00:16:37.814 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.073 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.073 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.073 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.073 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.073 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.073 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.073 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.331 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:16:38.331 16:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.898 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.899 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.899 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.899 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.899 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.158 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.158 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.158 00:16:39.417 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.417 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.417 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.417 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.417 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.417 16:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.417 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.417 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.417 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.417 { 00:16:39.417 "cntlid": 9, 00:16:39.417 "qid": 0, 00:16:39.417 "state": "enabled", 00:16:39.417 "thread": "nvmf_tgt_poll_group_000", 00:16:39.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:39.417 "listen_address": { 00:16:39.417 "trtype": "TCP", 00:16:39.417 "adrfam": "IPv4", 00:16:39.417 "traddr": "10.0.0.2", 00:16:39.417 "trsvcid": "4420" 00:16:39.417 }, 00:16:39.417 "peer_address": { 00:16:39.417 "trtype": "TCP", 00:16:39.417 "adrfam": "IPv4", 00:16:39.417 "traddr": "10.0.0.1", 00:16:39.417 "trsvcid": "34644" 00:16:39.417 }, 00:16:39.417 "auth": { 00:16:39.417 "state": "completed", 00:16:39.417 "digest": "sha256", 00:16:39.417 "dhgroup": "ffdhe2048" 00:16:39.417 } 00:16:39.417 } 00:16:39.417 ]' 00:16:39.417 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.675 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.675 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.675 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:39.675 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.675 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.676 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.676 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.934 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:16:39.934 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:16:40.501 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.501 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.501 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.501 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.501 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.501 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.501 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.501 16:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.501 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:40.501 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.501 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.501 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:40.501 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.501 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.501 16:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.501 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.501 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.501 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.502 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.502 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.502 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.760 00:16:40.760 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.760 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.760 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.019 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.019 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.019 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.019 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.019 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.019 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.019 { 00:16:41.019 "cntlid": 11, 00:16:41.019 "qid": 0, 00:16:41.019 "state": "enabled", 00:16:41.019 "thread": "nvmf_tgt_poll_group_000", 00:16:41.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:41.019 "listen_address": { 00:16:41.019 "trtype": "TCP", 00:16:41.019 "adrfam": "IPv4", 00:16:41.019 "traddr": "10.0.0.2", 00:16:41.019 "trsvcid": "4420" 00:16:41.019 }, 00:16:41.019 "peer_address": { 00:16:41.019 "trtype": "TCP", 00:16:41.019 "adrfam": "IPv4", 00:16:41.019 "traddr": "10.0.0.1", 00:16:41.019 "trsvcid": "34672" 00:16:41.019 }, 00:16:41.019 "auth": { 00:16:41.019 "state": "completed", 00:16:41.019 "digest": "sha256", 00:16:41.019 "dhgroup": "ffdhe2048" 00:16:41.019 } 00:16:41.019 } 00:16:41.019 ]' 00:16:41.019 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.019 16:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.019 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.019 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:41.019 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.278 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.278 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.278 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.278 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:16:41.278 16:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:16:41.844 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.844 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.844 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.844 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.844 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.844 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.845 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:41.845 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:42.103 16:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.103 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.361 00:16:42.362 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.362 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.362 16:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.620 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.620 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.620 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.620 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.620 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.620 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.620 { 00:16:42.620 "cntlid": 13, 00:16:42.620 "qid": 0, 00:16:42.620 "state": "enabled", 00:16:42.620 "thread": "nvmf_tgt_poll_group_000", 00:16:42.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.620 "listen_address": { 00:16:42.620 "trtype": "TCP", 00:16:42.620 "adrfam": "IPv4", 00:16:42.620 "traddr": "10.0.0.2", 00:16:42.620 "trsvcid": "4420" 00:16:42.620 }, 00:16:42.620 "peer_address": { 00:16:42.620 "trtype": "TCP", 00:16:42.620 "adrfam": "IPv4", 00:16:42.620 "traddr": "10.0.0.1", 00:16:42.620 "trsvcid": "34696" 00:16:42.620 }, 00:16:42.620 "auth": { 00:16:42.621 "state": "completed", 00:16:42.621 "digest": 
"sha256", 00:16:42.621 "dhgroup": "ffdhe2048" 00:16:42.621 } 00:16:42.621 } 00:16:42.621 ]' 00:16:42.621 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.621 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.621 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.621 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.621 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.621 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.621 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.621 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.879 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:16:42.879 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:16:43.446 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.447 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.447 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.447 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.447 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.447 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.447 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.447 16:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.706 16:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.706 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.964 00:16:43.964 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.964 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.964 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.223 { 00:16:44.223 "cntlid": 15, 00:16:44.223 "qid": 0, 00:16:44.223 "state": "enabled", 00:16:44.223 "thread": "nvmf_tgt_poll_group_000", 00:16:44.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:44.223 "listen_address": { 00:16:44.223 "trtype": "TCP", 00:16:44.223 "adrfam": "IPv4", 00:16:44.223 "traddr": "10.0.0.2", 00:16:44.223 "trsvcid": "4420" 00:16:44.223 }, 00:16:44.223 "peer_address": { 00:16:44.223 "trtype": "TCP", 00:16:44.223 "adrfam": "IPv4", 00:16:44.223 "traddr": "10.0.0.1", 00:16:44.223 
"trsvcid": "34740" 00:16:44.223 }, 00:16:44.223 "auth": { 00:16:44.223 "state": "completed", 00:16:44.223 "digest": "sha256", 00:16:44.223 "dhgroup": "ffdhe2048" 00:16:44.223 } 00:16:44.223 } 00:16:44.223 ]' 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.223 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.530 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:16:44.530 16:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:16:45.164 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.164 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:45.164 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.164 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:45.165 16:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.165 16:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.423 00:16:45.681 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.681 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.681 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.681 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.681 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.681 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.681 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.681 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.681 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.681 { 00:16:45.681 "cntlid": 17, 00:16:45.681 "qid": 0, 00:16:45.681 "state": "enabled", 00:16:45.681 "thread": "nvmf_tgt_poll_group_000", 00:16:45.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.681 "listen_address": { 00:16:45.681 "trtype": "TCP", 00:16:45.681 "adrfam": "IPv4", 
00:16:45.681 "traddr": "10.0.0.2", 00:16:45.681 "trsvcid": "4420" 00:16:45.681 }, 00:16:45.681 "peer_address": { 00:16:45.681 "trtype": "TCP", 00:16:45.681 "adrfam": "IPv4", 00:16:45.681 "traddr": "10.0.0.1", 00:16:45.681 "trsvcid": "34772" 00:16:45.681 }, 00:16:45.681 "auth": { 00:16:45.681 "state": "completed", 00:16:45.681 "digest": "sha256", 00:16:45.681 "dhgroup": "ffdhe3072" 00:16:45.681 } 00:16:45.681 } 00:16:45.681 ]' 00:16:45.681 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.939 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.939 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.939 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.939 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.939 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.939 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.939 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.198 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:16:46.198 16:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.764 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.023 00:16:47.282 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.282 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.282 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.282 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.282 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.282 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.282 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.282 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.282 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.282 { 
00:16:47.282 "cntlid": 19, 00:16:47.282 "qid": 0, 00:16:47.282 "state": "enabled", 00:16:47.282 "thread": "nvmf_tgt_poll_group_000", 00:16:47.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:47.282 "listen_address": { 00:16:47.282 "trtype": "TCP", 00:16:47.282 "adrfam": "IPv4", 00:16:47.282 "traddr": "10.0.0.2", 00:16:47.282 "trsvcid": "4420" 00:16:47.282 }, 00:16:47.282 "peer_address": { 00:16:47.282 "trtype": "TCP", 00:16:47.282 "adrfam": "IPv4", 00:16:47.282 "traddr": "10.0.0.1", 00:16:47.282 "trsvcid": "34800" 00:16:47.282 }, 00:16:47.282 "auth": { 00:16:47.282 "state": "completed", 00:16:47.282 "digest": "sha256", 00:16:47.282 "dhgroup": "ffdhe3072" 00:16:47.282 } 00:16:47.282 } 00:16:47.282 ]' 00:16:47.282 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.540 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.540 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.540 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.540 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.540 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.540 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.540 16:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.798 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:16:47.798 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.364 16:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.622 00:16:48.622 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.622 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.622 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.881 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.881 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.881 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.881 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.881 16:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.881 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.881 { 00:16:48.881 "cntlid": 21, 00:16:48.881 "qid": 0, 00:16:48.881 "state": "enabled", 00:16:48.881 "thread": "nvmf_tgt_poll_group_000", 00:16:48.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:48.881 "listen_address": { 00:16:48.881 "trtype": "TCP", 00:16:48.881 "adrfam": "IPv4", 00:16:48.881 "traddr": "10.0.0.2", 00:16:48.881 "trsvcid": "4420" 00:16:48.881 }, 00:16:48.881 "peer_address": { 00:16:48.881 "trtype": "TCP", 00:16:48.881 "adrfam": "IPv4", 00:16:48.881 "traddr": "10.0.0.1", 00:16:48.881 "trsvcid": "34836" 00:16:48.881 }, 00:16:48.881 "auth": { 00:16:48.881 "state": "completed", 00:16:48.881 "digest": "sha256", 00:16:48.881 "dhgroup": "ffdhe3072" 00:16:48.881 } 00:16:48.881 } 00:16:48.881 ]' 00:16:48.881 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.881 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.881 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.144 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.144 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.144 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.144 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.144 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.405 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:16:49.405 16:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.972 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.231 00:16:50.489 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.489 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.489 16:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.489 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.489 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.489 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.489 16:41:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.489 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.489 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.489 { 00:16:50.489 "cntlid": 23, 00:16:50.489 "qid": 0, 00:16:50.489 "state": "enabled", 00:16:50.489 "thread": "nvmf_tgt_poll_group_000", 00:16:50.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.489 "listen_address": { 00:16:50.489 "trtype": "TCP", 00:16:50.489 "adrfam": "IPv4", 00:16:50.489 "traddr": "10.0.0.2", 00:16:50.489 "trsvcid": "4420" 00:16:50.489 }, 00:16:50.489 "peer_address": { 00:16:50.489 "trtype": "TCP", 00:16:50.489 "adrfam": "IPv4", 00:16:50.489 "traddr": "10.0.0.1", 00:16:50.489 "trsvcid": "57602" 00:16:50.489 }, 00:16:50.489 "auth": { 00:16:50.489 "state": "completed", 00:16:50.489 "digest": "sha256", 00:16:50.489 "dhgroup": "ffdhe3072" 00:16:50.489 } 00:16:50.489 } 00:16:50.489 ]' 00:16:50.489 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.748 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.748 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.748 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.748 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.748 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.748 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.748 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.006 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:16:51.006 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:16:51.574 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.574 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:51.574 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.574 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.574 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:51.574 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.574 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.574 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.574 16:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.574 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.832 00:16:52.090 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.090 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.090 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.090 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.090 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.090 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.090 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.090 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.090 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.090 { 00:16:52.090 "cntlid": 25, 00:16:52.090 "qid": 0, 00:16:52.090 "state": "enabled", 00:16:52.090 "thread": "nvmf_tgt_poll_group_000", 00:16:52.090 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:52.090 "listen_address": { 00:16:52.090 "trtype": "TCP", 00:16:52.090 "adrfam": "IPv4", 00:16:52.090 "traddr": "10.0.0.2", 00:16:52.090 "trsvcid": "4420" 00:16:52.090 }, 00:16:52.090 "peer_address": { 00:16:52.090 "trtype": "TCP", 00:16:52.090 "adrfam": "IPv4", 00:16:52.090 "traddr": "10.0.0.1", 00:16:52.090 "trsvcid": "57618" 00:16:52.090 }, 00:16:52.090 "auth": { 00:16:52.090 "state": "completed", 00:16:52.090 "digest": "sha256", 00:16:52.090 "dhgroup": "ffdhe4096" 00:16:52.090 } 00:16:52.090 } 00:16:52.090 ]' 00:16:52.090 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.349 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.349 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.349 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.349 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.349 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.349 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.349 16:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.607 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:16:52.607 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:16:53.173 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.173 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.173 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.173 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.173 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.173 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.173 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.174 16:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.432 00:16:53.432 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.432 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.432 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.690 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.690 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.690 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.690 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.690 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.690 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.690 { 00:16:53.690 "cntlid": 27, 00:16:53.690 "qid": 0, 00:16:53.690 "state": "enabled", 00:16:53.690 "thread": "nvmf_tgt_poll_group_000", 00:16:53.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:53.690 "listen_address": { 00:16:53.690 "trtype": "TCP", 00:16:53.690 "adrfam": "IPv4", 00:16:53.690 "traddr": "10.0.0.2", 00:16:53.690 "trsvcid": "4420" 00:16:53.690 }, 00:16:53.690 "peer_address": { 00:16:53.690 "trtype": "TCP", 00:16:53.690 "adrfam": "IPv4", 00:16:53.690 "traddr": "10.0.0.1", 00:16:53.690 "trsvcid": "57648" 00:16:53.690 }, 00:16:53.690 "auth": { 00:16:53.690 "state": "completed", 00:16:53.690 "digest": "sha256", 00:16:53.690 "dhgroup": "ffdhe4096" 00:16:53.690 } 00:16:53.690 } 00:16:53.690 ]' 00:16:53.690 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.690 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.690 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.949 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.949 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.949 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.949 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.949 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.207 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:16:54.207 16:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:54.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.773 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.032 00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.290 { 00:16:55.290 "cntlid": 29, 00:16:55.290 "qid": 0, 00:16:55.290 "state": "enabled", 00:16:55.290 "thread": "nvmf_tgt_poll_group_000", 00:16:55.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:55.290 "listen_address": { 00:16:55.290 "trtype": "TCP", 00:16:55.290 "adrfam": "IPv4", 00:16:55.290 "traddr": "10.0.0.2", 00:16:55.290 "trsvcid": "4420" 00:16:55.290 }, 00:16:55.290 "peer_address": { 00:16:55.290 "trtype": "TCP", 00:16:55.290 "adrfam": "IPv4", 00:16:55.290 "traddr": "10.0.0.1", 00:16:55.290 "trsvcid": "57680" 00:16:55.290 }, 00:16:55.290 "auth": { 00:16:55.290 "state": "completed", 00:16:55.290 "digest": "sha256", 00:16:55.290 "dhgroup": "ffdhe4096" 00:16:55.290 } 00:16:55.290 } 00:16:55.290 ]' 00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.290 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.548 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.548 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.548 16:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.548 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.548 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.548 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.806 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:16:55.806 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: 
--dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:16:56.380 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.380 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.380 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.380 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.380 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.380 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.380 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.380 16:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:56.380 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:56.380 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.380 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.380 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:56.380 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.380 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.380 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:56.380 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.380 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.639 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.639 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.639 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.639 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.639 00:16:56.897 16:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.897 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.897 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.897 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.898 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.898 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.898 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.898 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.898 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.898 { 00:16:56.898 "cntlid": 31, 00:16:56.898 "qid": 0, 00:16:56.898 "state": "enabled", 00:16:56.898 "thread": "nvmf_tgt_poll_group_000", 00:16:56.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:56.898 "listen_address": { 00:16:56.898 "trtype": "TCP", 00:16:56.898 "adrfam": "IPv4", 00:16:56.898 "traddr": "10.0.0.2", 00:16:56.898 "trsvcid": "4420" 00:16:56.898 }, 00:16:56.898 "peer_address": { 00:16:56.898 "trtype": "TCP", 00:16:56.898 "adrfam": "IPv4", 00:16:56.898 "traddr": "10.0.0.1", 00:16:56.898 "trsvcid": "57704" 00:16:56.898 }, 00:16:56.898 "auth": { 00:16:56.898 "state": "completed", 00:16:56.898 "digest": "sha256", 00:16:56.898 "dhgroup": "ffdhe4096" 00:16:56.898 } 00:16:56.898 } 00:16:56.898 ]' 00:16:56.898 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.898 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.898 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.156 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.156 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.156 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.156 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.156 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.505 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:16:57.505 16:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:16:57.763 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.763 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.763 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.763 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.021 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.586 00:16:58.586 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.586 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.586 16:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.586 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.586 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.586 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.586 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.586 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.586 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.586 { 00:16:58.586 "cntlid": 33, 00:16:58.586 "qid": 0, 00:16:58.586 "state": "enabled", 00:16:58.586 "thread": "nvmf_tgt_poll_group_000", 00:16:58.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:58.586 "listen_address": { 00:16:58.586 "trtype": "TCP", 00:16:58.586 "adrfam": "IPv4", 00:16:58.586 "traddr": "10.0.0.2", 00:16:58.586 "trsvcid": "4420" 00:16:58.586 }, 00:16:58.586 "peer_address": { 00:16:58.586 "trtype": "TCP", 00:16:58.586 "adrfam": "IPv4", 00:16:58.586 "traddr": "10.0.0.1", 00:16:58.586 "trsvcid": "57720" 00:16:58.586 }, 00:16:58.586 "auth": { 00:16:58.586 "state": "completed", 00:16:58.586 "digest": "sha256", 00:16:58.586 "dhgroup": "ffdhe6144" 00:16:58.586 } 00:16:58.586 } 00:16:58.586 ]' 00:16:58.586 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.844 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.844 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.844 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.844 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.844 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.844 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.844 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.103 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret 
DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:16:59.103 16:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:16:59.669 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.669 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:59.669 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.669 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.669 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.669 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.669 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.669 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.928 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.190 00:17:00.190 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.190 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.190 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.448 { 00:17:00.448 "cntlid": 35, 00:17:00.448 "qid": 0, 00:17:00.448 "state": "enabled", 00:17:00.448 "thread": "nvmf_tgt_poll_group_000", 00:17:00.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:00.448 "listen_address": { 00:17:00.448 "trtype": "TCP", 00:17:00.448 "adrfam": "IPv4", 00:17:00.448 "traddr": "10.0.0.2", 00:17:00.448 "trsvcid": "4420" 00:17:00.448 }, 00:17:00.448 "peer_address": { 00:17:00.448 "trtype": "TCP", 00:17:00.448 "adrfam": "IPv4", 00:17:00.448 "traddr": "10.0.0.1", 00:17:00.448 "trsvcid": "40604" 00:17:00.448 }, 00:17:00.448 "auth": { 00:17:00.448 "state": "completed", 00:17:00.448 "digest": "sha256", 00:17:00.448 "dhgroup": "ffdhe6144" 00:17:00.448 } 00:17:00.448 } 00:17:00.448 ]' 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.448 16:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.449 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.707 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:00.707 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:01.275 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.275 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.275 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.275 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.275 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.275 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.275 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.275 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.535 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:01.535 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.535 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.535 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:01.535 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.535 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.535 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.535 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.535 16:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.535 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.535 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.535 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.535 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.794 00:17:01.794 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.794 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.794 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.054 { 00:17:02.054 "cntlid": 37, 00:17:02.054 "qid": 0, 00:17:02.054 "state": "enabled", 00:17:02.054 "thread": "nvmf_tgt_poll_group_000", 00:17:02.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:02.054 "listen_address": { 00:17:02.054 "trtype": "TCP", 00:17:02.054 "adrfam": "IPv4", 00:17:02.054 "traddr": "10.0.0.2", 00:17:02.054 "trsvcid": "4420" 00:17:02.054 }, 00:17:02.054 "peer_address": { 00:17:02.054 "trtype": "TCP", 00:17:02.054 "adrfam": "IPv4", 00:17:02.054 "traddr": "10.0.0.1", 00:17:02.054 "trsvcid": "40616" 00:17:02.054 }, 00:17:02.054 "auth": { 00:17:02.054 "state": "completed", 00:17:02.054 "digest": "sha256", 00:17:02.054 "dhgroup": "ffdhe6144" 00:17:02.054 } 00:17:02.054 } 00:17:02.054 ]' 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:02.054 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.314 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:02.314 16:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:02.880 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.880 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:02.880 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.880 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.881 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.881 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.881 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:02.881 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.139 16:42:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.139 16:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.398 00:17:03.658 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.658 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.658 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.658 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.658 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.658 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.658 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.658 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.658 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.658 { 00:17:03.658 "cntlid": 39, 00:17:03.658 "qid": 0, 00:17:03.658 "state": "enabled", 00:17:03.658 "thread": "nvmf_tgt_poll_group_000", 00:17:03.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:03.658 "listen_address": { 00:17:03.658 "trtype": "TCP", 00:17:03.658 "adrfam": "IPv4", 00:17:03.658 "traddr": "10.0.0.2", 00:17:03.658 "trsvcid": "4420" 00:17:03.658 }, 00:17:03.658 "peer_address": { 00:17:03.658 "trtype": "TCP", 00:17:03.658 "adrfam": "IPv4", 00:17:03.658 "traddr": "10.0.0.1", 00:17:03.658 "trsvcid": "40638" 00:17:03.658 }, 00:17:03.658 "auth": { 00:17:03.658 "state": "completed", 00:17:03.658 "digest": "sha256", 00:17:03.658 "dhgroup": "ffdhe6144" 00:17:03.658 } 00:17:03.658 } 00:17:03.658 ]' 00:17:03.658 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.917 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.917 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.917 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.917 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.917 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:03.917 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.917 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.176 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:04.176 16:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:04.743 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.743 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:04.743 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.743 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.743 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.743 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.743 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.743 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.743 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
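
For readability, each connect_authenticate iteration traced above reduces to the host/target RPC sequence sketched below. This is a minimal reconstruction from the trace, not the test script itself: it assumes the target subsystem nqn.2024-03.io.spdk:cnode0 is already listening on 10.0.0.2:4420 and serving RPCs on its default socket, that the host-side bdev_nvme RPC server is the /var/tmp/host.sock instance used by hostrpc above, that DH-HMAC-CHAP keys named key0/ckey0 were registered earlier in the run, and that the plaintext DHHC-1 secrets (elided below) correspond to those keys.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  hostid=00ad29c2-ccbd-e911-906e-0017a4403562

  # Target side: authorize the host with DH-HMAC-CHAP keys key0/ckey0
  # (default target RPC socket assumed).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Host side: restrict the initiator to a single digest/dhgroup, then attach.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Check that the controller attached and that the target negotiated the
  # expected authentication parameters on the new qpair.
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  echo "$qpairs" | jq -r '.[0].auth.digest'    # expect: sha256
  echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expect: ffdhe4096
  echo "$qpairs" | jq -r '.[0].auth.state'     # expect: completed

  # Tear down, then repeat the authentication through the kernel initiator,
  # passing the plaintext DHHC-1 secrets directly (values elided here).
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The surrounding iterations repeat this same sequence, varying only the digest (sha256, sha384), the DH group (ffdhe4096, ffdhe6144, ffdhe8192, null) and the key index (key0 through key3, where key3 is configured without a controller key and therefore connects without --dhchap-ctrlr-key / --dhchap-ctrl-secret).
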
00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.003 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.569 00:17:05.569 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.569 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.569 16:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.569 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.569 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.569 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.569 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.569 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.569 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.569 { 00:17:05.569 "cntlid": 41, 00:17:05.569 "qid": 0, 00:17:05.569 "state": "enabled", 00:17:05.569 "thread": "nvmf_tgt_poll_group_000", 00:17:05.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:05.569 "listen_address": { 00:17:05.569 "trtype": "TCP", 00:17:05.569 "adrfam": "IPv4", 00:17:05.569 "traddr": "10.0.0.2", 00:17:05.569 "trsvcid": "4420" 00:17:05.569 }, 00:17:05.569 "peer_address": { 00:17:05.569 "trtype": "TCP", 00:17:05.569 "adrfam": "IPv4", 00:17:05.569 "traddr": "10.0.0.1", 00:17:05.569 "trsvcid": "40664" 00:17:05.569 }, 00:17:05.569 "auth": { 00:17:05.569 "state": "completed", 00:17:05.569 "digest": "sha256", 00:17:05.569 "dhgroup": "ffdhe8192" 00:17:05.569 } 00:17:05.569 } 00:17:05.569 ]' 00:17:05.569 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.569 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.569 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.829 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.829 16:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.829 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.829 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.829 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.829 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:05.829 16:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:06.397 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.397 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:06.397 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.397 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.656 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.223 00:17:07.223 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.223 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.223 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.483 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.483 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.483 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.483 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.483 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.483 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.483 { 00:17:07.483 "cntlid": 43, 00:17:07.483 "qid": 0, 00:17:07.483 "state": "enabled", 00:17:07.483 "thread": "nvmf_tgt_poll_group_000", 00:17:07.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:07.483 "listen_address": { 00:17:07.483 "trtype": "TCP", 00:17:07.483 "adrfam": "IPv4", 00:17:07.483 "traddr": "10.0.0.2", 00:17:07.483 "trsvcid": "4420" 00:17:07.483 }, 00:17:07.483 "peer_address": { 00:17:07.483 "trtype": "TCP", 00:17:07.483 "adrfam": "IPv4", 00:17:07.483 "traddr": "10.0.0.1", 00:17:07.483 "trsvcid": "40690" 00:17:07.483 }, 00:17:07.483 "auth": { 00:17:07.483 "state": "completed", 00:17:07.483 "digest": "sha256", 00:17:07.483 "dhgroup": "ffdhe8192" 00:17:07.483 } 00:17:07.483 } 00:17:07.483 ]' 00:17:07.483 16:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.483 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:07.483 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.483 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.483 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.483 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.483 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.483 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.742 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:07.742 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:08.310 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.310 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.310 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.310 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.310 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.310 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.310 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:08.310 16:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:08.569 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:08.569 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.569 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.569 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:08.569 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:08.569 16:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.569 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.569 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.569 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.569 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.570 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.570 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.570 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.138 00:17:09.138 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.138 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.138 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.398 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.398 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.398 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.398 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.398 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.398 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.398 { 00:17:09.398 "cntlid": 45, 00:17:09.398 "qid": 0, 00:17:09.398 "state": "enabled", 00:17:09.398 "thread": "nvmf_tgt_poll_group_000", 00:17:09.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:09.398 "listen_address": { 00:17:09.398 "trtype": "TCP", 00:17:09.398 "adrfam": "IPv4", 00:17:09.398 "traddr": "10.0.0.2", 00:17:09.398 "trsvcid": "4420" 00:17:09.398 }, 00:17:09.398 "peer_address": { 00:17:09.398 "trtype": "TCP", 00:17:09.398 "adrfam": "IPv4", 00:17:09.398 "traddr": "10.0.0.1", 00:17:09.398 "trsvcid": "40718" 00:17:09.398 }, 00:17:09.399 "auth": { 00:17:09.399 "state": "completed", 00:17:09.399 "digest": "sha256", 00:17:09.399 "dhgroup": "ffdhe8192" 00:17:09.399 } 00:17:09.399 } 00:17:09.399 ]' 00:17:09.399 
16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.399 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.399 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.399 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:09.399 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.399 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.399 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.399 16:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.658 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:09.658 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:10.228 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.228 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.228 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.228 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.228 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.228 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.228 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:10.228 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.487 16:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.487 16:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.747 00:17:11.005 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.005 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.005 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.005 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.005 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.005 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.005 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.005 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.005 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.005 { 00:17:11.005 "cntlid": 47, 00:17:11.005 "qid": 0, 00:17:11.005 "state": "enabled", 00:17:11.005 "thread": "nvmf_tgt_poll_group_000", 00:17:11.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:11.005 "listen_address": { 00:17:11.005 "trtype": "TCP", 00:17:11.005 "adrfam": "IPv4", 00:17:11.005 "traddr": "10.0.0.2", 00:17:11.005 "trsvcid": "4420" 00:17:11.005 }, 00:17:11.005 "peer_address": { 00:17:11.005 "trtype": "TCP", 00:17:11.005 "adrfam": "IPv4", 00:17:11.005 "traddr": "10.0.0.1", 00:17:11.005 "trsvcid": "39710" 00:17:11.005 }, 00:17:11.005 "auth": { 00:17:11.005 "state": "completed", 00:17:11.005 
"digest": "sha256", 00:17:11.005 "dhgroup": "ffdhe8192" 00:17:11.005 } 00:17:11.005 } 00:17:11.005 ]' 00:17:11.005 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.264 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.265 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.265 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.265 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.265 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.265 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.265 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.523 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:11.523 16:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:12.092 16:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.092 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.351 00:17:12.351 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.351 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.351 16:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.609 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.609 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.609 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.609 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.609 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.609 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.609 { 00:17:12.609 "cntlid": 49, 00:17:12.609 "qid": 0, 00:17:12.609 "state": "enabled", 00:17:12.609 "thread": "nvmf_tgt_poll_group_000", 00:17:12.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.609 "listen_address": { 00:17:12.609 "trtype": "TCP", 00:17:12.609 "adrfam": "IPv4", 
00:17:12.609 "traddr": "10.0.0.2", 00:17:12.609 "trsvcid": "4420" 00:17:12.609 }, 00:17:12.609 "peer_address": { 00:17:12.609 "trtype": "TCP", 00:17:12.609 "adrfam": "IPv4", 00:17:12.609 "traddr": "10.0.0.1", 00:17:12.609 "trsvcid": "39744" 00:17:12.609 }, 00:17:12.609 "auth": { 00:17:12.609 "state": "completed", 00:17:12.609 "digest": "sha384", 00:17:12.609 "dhgroup": "null" 00:17:12.609 } 00:17:12.609 } 00:17:12.609 ]' 00:17:12.609 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.609 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.867 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.867 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:12.867 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.867 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.867 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.867 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.126 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:13.126 16:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.695 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.954 00:17:13.954 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.954 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.954 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.212 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.212 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.212 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.212 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.212 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.212 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.213 { 00:17:14.213 "cntlid": 51, 00:17:14.213 "qid": 0, 00:17:14.213 "state": "enabled", 
00:17:14.213 "thread": "nvmf_tgt_poll_group_000", 00:17:14.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:14.213 "listen_address": { 00:17:14.213 "trtype": "TCP", 00:17:14.213 "adrfam": "IPv4", 00:17:14.213 "traddr": "10.0.0.2", 00:17:14.213 "trsvcid": "4420" 00:17:14.213 }, 00:17:14.213 "peer_address": { 00:17:14.213 "trtype": "TCP", 00:17:14.213 "adrfam": "IPv4", 00:17:14.213 "traddr": "10.0.0.1", 00:17:14.213 "trsvcid": "39782" 00:17:14.213 }, 00:17:14.213 "auth": { 00:17:14.213 "state": "completed", 00:17:14.213 "digest": "sha384", 00:17:14.213 "dhgroup": "null" 00:17:14.213 } 00:17:14.213 } 00:17:14.213 ]' 00:17:14.213 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.213 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.213 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.213 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:14.213 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.472 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.472 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.472 16:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.472 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:14.472 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:15.040 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.040 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:15.040 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.040 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.040 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.040 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.040 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:15.040 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.299 16:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.558 00:17:15.558 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.558 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.558 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.817 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.817 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.817 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.817 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.817 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.817 16:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.817 { 00:17:15.817 "cntlid": 53, 00:17:15.817 "qid": 0, 00:17:15.817 "state": "enabled", 00:17:15.817 "thread": "nvmf_tgt_poll_group_000", 00:17:15.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:15.817 "listen_address": { 00:17:15.817 "trtype": "TCP", 00:17:15.817 "adrfam": "IPv4", 00:17:15.817 "traddr": "10.0.0.2", 00:17:15.817 "trsvcid": "4420" 00:17:15.817 }, 00:17:15.817 "peer_address": { 00:17:15.817 "trtype": "TCP", 00:17:15.817 "adrfam": "IPv4", 00:17:15.817 "traddr": "10.0.0.1", 00:17:15.817 "trsvcid": "39806" 00:17:15.817 }, 00:17:15.817 "auth": { 00:17:15.817 "state": "completed", 00:17:15.817 "digest": "sha384", 00:17:15.817 "dhgroup": "null" 00:17:15.817 } 00:17:15.817 } 00:17:15.817 ]' 00:17:15.817 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.817 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.817 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.817 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:15.817 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.076 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.076 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.076 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.076 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:16.076 16:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:16.644 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.644 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.644 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.644 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.644 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.644 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:16.644 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.644 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.903 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.162 00:17:17.162 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.162 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.162 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.421 { 00:17:17.421 "cntlid": 55, 00:17:17.421 "qid": 0, 00:17:17.421 "state": "enabled", 00:17:17.421 "thread": "nvmf_tgt_poll_group_000", 00:17:17.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:17.421 "listen_address": { 00:17:17.421 "trtype": "TCP", 00:17:17.421 "adrfam": "IPv4", 00:17:17.421 "traddr": "10.0.0.2", 00:17:17.421 "trsvcid": "4420" 00:17:17.421 }, 00:17:17.421 "peer_address": { 00:17:17.421 "trtype": "TCP", 00:17:17.421 "adrfam": "IPv4", 00:17:17.421 "traddr": "10.0.0.1", 00:17:17.421 "trsvcid": "39816" 00:17:17.421 }, 00:17:17.421 "auth": { 00:17:17.421 "state": "completed", 00:17:17.421 "digest": "sha384", 00:17:17.421 "dhgroup": "null" 00:17:17.421 } 00:17:17.421 } 00:17:17.421 ]' 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:17.421 16:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.421 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.421 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.421 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.681 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:17.681 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:18.249 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.249 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:18.249 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.249 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.249 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.249 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.249 16:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.249 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.249 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.508 16:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.766 00:17:18.766 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.766 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.766 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.025 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.025 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.025 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:19.025 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.025 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.025 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.025 { 00:17:19.025 "cntlid": 57, 00:17:19.025 "qid": 0, 00:17:19.025 "state": "enabled", 00:17:19.025 "thread": "nvmf_tgt_poll_group_000", 00:17:19.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:19.025 "listen_address": { 00:17:19.025 "trtype": "TCP", 00:17:19.025 "adrfam": "IPv4", 00:17:19.025 "traddr": "10.0.0.2", 00:17:19.025 "trsvcid": "4420" 00:17:19.025 }, 00:17:19.025 "peer_address": { 00:17:19.025 "trtype": "TCP", 00:17:19.025 "adrfam": "IPv4", 00:17:19.026 "traddr": "10.0.0.1", 00:17:19.026 "trsvcid": "39842" 00:17:19.026 }, 00:17:19.026 "auth": { 00:17:19.026 "state": "completed", 00:17:19.026 "digest": "sha384", 00:17:19.026 "dhgroup": "ffdhe2048" 00:17:19.026 } 00:17:19.026 } 00:17:19.026 ]' 00:17:19.026 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.026 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.026 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.026 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.026 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.026 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.026 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.026 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.284 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:19.284 16:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:19.850 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.850 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.850 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.850 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.850 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.850 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.850 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.850 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.109 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.383 00:17:20.383 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.383 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.383 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.383 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.383 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.383 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.383 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.383 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.383 16:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.383 { 00:17:20.383 "cntlid": 59, 00:17:20.383 "qid": 0, 00:17:20.383 "state": "enabled", 00:17:20.383 "thread": "nvmf_tgt_poll_group_000", 00:17:20.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:20.383 "listen_address": { 00:17:20.383 "trtype": "TCP", 00:17:20.383 "adrfam": "IPv4", 00:17:20.383 "traddr": "10.0.0.2", 00:17:20.383 "trsvcid": "4420" 00:17:20.383 }, 00:17:20.383 "peer_address": { 00:17:20.383 "trtype": "TCP", 00:17:20.383 "adrfam": "IPv4", 00:17:20.383 "traddr": "10.0.0.1", 00:17:20.383 "trsvcid": "38604" 00:17:20.383 }, 00:17:20.383 "auth": { 00:17:20.383 "state": "completed", 00:17:20.383 "digest": "sha384", 00:17:20.383 "dhgroup": "ffdhe2048" 00:17:20.383 } 00:17:20.383 } 00:17:20.383 ]' 00:17:20.383 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.642 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.642 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.642 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.642 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.642 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.642 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.642 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.900 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:20.900 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:21.470 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.470 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:21.470 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.470 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.470 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.470 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.470 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.470 16:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.470 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:21.470 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.470 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.470 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:21.470 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.470 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.470 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.470 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.470 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.748 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.748 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.748 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.748 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.748 00:17:21.748 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.748 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.748 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.027 { 00:17:22.027 "cntlid": 61, 00:17:22.027 "qid": 0, 00:17:22.027 "state": "enabled", 00:17:22.027 "thread": "nvmf_tgt_poll_group_000", 00:17:22.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:22.027 "listen_address": { 00:17:22.027 "trtype": "TCP", 00:17:22.027 "adrfam": "IPv4", 00:17:22.027 "traddr": "10.0.0.2", 00:17:22.027 "trsvcid": "4420" 00:17:22.027 }, 00:17:22.027 "peer_address": { 00:17:22.027 "trtype": "TCP", 00:17:22.027 "adrfam": "IPv4", 00:17:22.027 "traddr": "10.0.0.1", 00:17:22.027 "trsvcid": "38632" 00:17:22.027 }, 00:17:22.027 "auth": { 00:17:22.027 "state": "completed", 00:17:22.027 "digest": "sha384", 00:17:22.027 "dhgroup": "ffdhe2048" 00:17:22.027 } 00:17:22.027 } 00:17:22.027 ]' 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.027 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.339 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.339 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.339 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.339 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:22.339 16:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:22.906 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.906 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.906 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.906 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.906 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.906 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.906 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:22.906 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.165 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.424 00:17:23.424 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.424 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.424 16:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.682 { 00:17:23.682 "cntlid": 63, 00:17:23.682 "qid": 0, 00:17:23.682 "state": "enabled", 00:17:23.682 "thread": "nvmf_tgt_poll_group_000", 00:17:23.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:23.682 "listen_address": { 00:17:23.682 "trtype": "TCP", 00:17:23.682 "adrfam": "IPv4", 00:17:23.682 "traddr": "10.0.0.2", 00:17:23.682 "trsvcid": "4420" 00:17:23.682 }, 00:17:23.682 "peer_address": { 00:17:23.682 "trtype": "TCP", 00:17:23.682 "adrfam": "IPv4", 00:17:23.682 "traddr": "10.0.0.1", 00:17:23.682 "trsvcid": "38656" 00:17:23.682 }, 00:17:23.682 "auth": { 00:17:23.682 "state": "completed", 00:17:23.682 "digest": "sha384", 00:17:23.682 "dhgroup": "ffdhe2048" 00:17:23.682 } 00:17:23.682 } 00:17:23.682 ]' 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.682 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.683 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.940 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:23.940 16:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:24.507 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:24.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.507 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:24.507 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.507 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.507 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.507 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.507 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.507 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.507 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.766 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.023 
00:17:25.023 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.023 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.023 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.280 { 00:17:25.280 "cntlid": 65, 00:17:25.280 "qid": 0, 00:17:25.280 "state": "enabled", 00:17:25.280 "thread": "nvmf_tgt_poll_group_000", 00:17:25.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:25.280 "listen_address": { 00:17:25.280 "trtype": "TCP", 00:17:25.280 "adrfam": "IPv4", 00:17:25.280 "traddr": "10.0.0.2", 00:17:25.280 "trsvcid": "4420" 00:17:25.280 }, 00:17:25.280 "peer_address": { 00:17:25.280 "trtype": "TCP", 00:17:25.280 "adrfam": "IPv4", 00:17:25.280 "traddr": "10.0.0.1", 00:17:25.280 "trsvcid": "38700" 00:17:25.280 }, 00:17:25.280 "auth": { 00:17:25.280 "state": "completed", 00:17:25.280 "digest": "sha384", 00:17:25.280 "dhgroup": "ffdhe3072" 00:17:25.280 } 00:17:25.280 } 00:17:25.280 ]' 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.280 16:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.538 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:25.538 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:26.105 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.105 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:26.105 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.105 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.105 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.105 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.105 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.105 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.364 16:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.622 00:17:26.622 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.622 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.622 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.881 { 00:17:26.881 "cntlid": 67, 00:17:26.881 "qid": 0, 00:17:26.881 "state": "enabled", 00:17:26.881 "thread": "nvmf_tgt_poll_group_000", 00:17:26.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:26.881 "listen_address": { 00:17:26.881 "trtype": "TCP", 00:17:26.881 "adrfam": "IPv4", 00:17:26.881 "traddr": "10.0.0.2", 00:17:26.881 "trsvcid": "4420" 00:17:26.881 }, 00:17:26.881 "peer_address": { 00:17:26.881 "trtype": "TCP", 00:17:26.881 "adrfam": "IPv4", 00:17:26.881 "traddr": "10.0.0.1", 00:17:26.881 "trsvcid": "38714" 00:17:26.881 }, 00:17:26.881 "auth": { 00:17:26.881 "state": "completed", 00:17:26.881 "digest": "sha384", 00:17:26.881 "dhgroup": "ffdhe3072" 00:17:26.881 } 00:17:26.881 } 00:17:26.881 ]' 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.881 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.140 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret 
DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:27.140 16:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:27.708 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.708 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.708 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.708 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.708 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.708 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.708 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.708 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.967 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.226 00:17:28.226 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.226 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.226 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.485 { 00:17:28.485 "cntlid": 69, 00:17:28.485 "qid": 0, 00:17:28.485 "state": "enabled", 00:17:28.485 "thread": "nvmf_tgt_poll_group_000", 00:17:28.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:28.485 "listen_address": { 00:17:28.485 "trtype": "TCP", 00:17:28.485 "adrfam": "IPv4", 00:17:28.485 "traddr": "10.0.0.2", 00:17:28.485 "trsvcid": "4420" 00:17:28.485 }, 00:17:28.485 "peer_address": { 00:17:28.485 "trtype": "TCP", 00:17:28.485 "adrfam": "IPv4", 00:17:28.485 "traddr": "10.0.0.1", 00:17:28.485 "trsvcid": "38746" 00:17:28.485 }, 00:17:28.485 "auth": { 00:17:28.485 "state": "completed", 00:17:28.485 "digest": "sha384", 00:17:28.485 "dhgroup": "ffdhe3072" 00:17:28.485 } 00:17:28.485 } 00:17:28.485 ]' 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.485 16:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.485 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.485 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.485 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:28.743 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:28.743 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:29.311 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.311 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.311 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.311 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.311 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.311 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.311 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.311 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:29.569 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:29.569 16:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.569 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.569 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:29.569 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.569 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.569 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:29.569 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.569 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.569 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.570 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
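The key3 pass just above differs from the earlier ones: ckeys[3] is empty, so both nvmf_subsystem_add_host and bdev_connect are called with --dhchap-key key3 only and no --dhchap-ctrlr-key, which corresponds to one-way (host-only) DH-HMAC-CHAP. auth.sh@68 drops the flag via a ':+' parameter expansion into an array; a sketch of the idiom, with $keyid standing in for the script's positional $3 and $hostnqn for the full host NQN seen in the log:

    # expands to nothing when ckeys[$keyid] is empty, otherwise to the flag pair
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"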
00:17:29.570 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.570 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.828 00:17:29.828 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.828 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.828 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.087 { 00:17:30.087 "cntlid": 71, 00:17:30.087 "qid": 0, 00:17:30.087 "state": "enabled", 00:17:30.087 "thread": "nvmf_tgt_poll_group_000", 00:17:30.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:30.087 "listen_address": { 00:17:30.087 "trtype": "TCP", 00:17:30.087 "adrfam": "IPv4", 00:17:30.087 "traddr": "10.0.0.2", 00:17:30.087 "trsvcid": "4420" 00:17:30.087 }, 00:17:30.087 "peer_address": { 00:17:30.087 "trtype": "TCP", 00:17:30.087 "adrfam": "IPv4", 00:17:30.087 "traddr": "10.0.0.1", 00:17:30.087 "trsvcid": "34904" 00:17:30.087 }, 00:17:30.087 "auth": { 00:17:30.087 "state": "completed", 00:17:30.087 "digest": "sha384", 00:17:30.087 "dhgroup": "ffdhe3072" 00:17:30.087 } 00:17:30.087 } 00:17:30.087 ]' 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.087 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.345 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:30.345 16:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:30.912 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.912 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:30.912 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.912 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.912 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.912 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.912 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.912 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:30.912 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
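After each attach, including the ffdhe4096/key0 one that follows, the script validates the negotiated parameters rather than just the connection: it dumps the subsystem's qpairs on the target and compares the reported auth fields with what was requested. A sketch of that check, assembled from the jq filters and expected values visible in the passes above; the here-strings are illustrative, the script itself keeps the JSON in a qpairs variable (as seen in the dumps) and runs the same filters.

    # target side: list the active qpairs for the subsystem
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # auth must have completed with the digest/dhgroup selected for this pass
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    # host side: tear the controller down before the next key/dhgroup combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0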
00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.171 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.429 00:17:31.429 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.429 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.429 16:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.429 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.429 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.429 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.429 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.429 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.429 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.429 { 00:17:31.429 "cntlid": 73, 00:17:31.429 "qid": 0, 00:17:31.429 "state": "enabled", 00:17:31.429 "thread": "nvmf_tgt_poll_group_000", 00:17:31.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:31.429 "listen_address": { 00:17:31.429 "trtype": "TCP", 00:17:31.429 "adrfam": "IPv4", 00:17:31.429 "traddr": "10.0.0.2", 00:17:31.429 "trsvcid": "4420" 00:17:31.429 }, 00:17:31.429 "peer_address": { 00:17:31.429 "trtype": "TCP", 00:17:31.429 "adrfam": "IPv4", 00:17:31.429 "traddr": "10.0.0.1", 00:17:31.429 "trsvcid": "34934" 00:17:31.429 }, 00:17:31.429 "auth": { 00:17:31.429 "state": "completed", 00:17:31.429 "digest": "sha384", 00:17:31.429 "dhgroup": "ffdhe4096" 00:17:31.429 } 00:17:31.429 } 00:17:31.429 ]' 00:17:31.429 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.688 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.688 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.688 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.688 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.688 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.688 
16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.688 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.946 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:31.946 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:32.515 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.515 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:32.515 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.515 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.515 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.515 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.515 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.515 16:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:32.515 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:32.515 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.515 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.515 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:32.515 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:32.515 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.515 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.515 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.515 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.775 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.775 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.775 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.775 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.775 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.034 { 00:17:33.034 "cntlid": 75, 00:17:33.034 "qid": 0, 00:17:33.034 "state": "enabled", 00:17:33.034 "thread": "nvmf_tgt_poll_group_000", 00:17:33.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:33.034 "listen_address": { 00:17:33.034 "trtype": "TCP", 00:17:33.034 "adrfam": "IPv4", 00:17:33.034 "traddr": "10.0.0.2", 00:17:33.034 "trsvcid": "4420" 00:17:33.034 }, 00:17:33.034 "peer_address": { 00:17:33.034 "trtype": "TCP", 00:17:33.034 "adrfam": "IPv4", 00:17:33.034 "traddr": "10.0.0.1", 00:17:33.034 "trsvcid": "34976" 00:17:33.034 }, 00:17:33.034 "auth": { 00:17:33.034 "state": "completed", 00:17:33.034 "digest": "sha384", 00:17:33.034 "dhgroup": "ffdhe4096" 00:17:33.034 } 00:17:33.034 } 00:17:33.034 ]' 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.034 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.292 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:33.292 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.292 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.292 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.292 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.550 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:33.550 16:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:34.116 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.117 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.117 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.117 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.117 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.117 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.117 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.117 16:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.375 00:17:34.634 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.634 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.634 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.634 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.634 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.634 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.634 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.634 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.634 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.634 { 00:17:34.634 "cntlid": 77, 00:17:34.634 "qid": 0, 00:17:34.634 "state": "enabled", 00:17:34.634 "thread": "nvmf_tgt_poll_group_000", 00:17:34.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:34.634 "listen_address": { 00:17:34.634 "trtype": "TCP", 00:17:34.634 "adrfam": "IPv4", 00:17:34.634 "traddr": "10.0.0.2", 00:17:34.634 "trsvcid": "4420" 00:17:34.634 }, 00:17:34.634 "peer_address": { 00:17:34.634 "trtype": "TCP", 00:17:34.634 "adrfam": "IPv4", 00:17:34.634 "traddr": "10.0.0.1", 00:17:34.634 "trsvcid": "34994" 00:17:34.634 }, 00:17:34.634 "auth": { 00:17:34.634 "state": "completed", 00:17:34.634 "digest": "sha384", 00:17:34.634 "dhgroup": "ffdhe4096" 00:17:34.634 } 00:17:34.634 } 00:17:34.634 ]' 00:17:34.634 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.892 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.892 16:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.892 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.892 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.892 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.892 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.892 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.151 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:35.151 16:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:35.718 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.718 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:35.718 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.718 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.718 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.718 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.718 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.718 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.976 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:35.976 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.976 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.976 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:35.976 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:35.976 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.977 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:35.977 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.977 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.977 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.977 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:35.977 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.977 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.236 00:17:36.236 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.236 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.236 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.236 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.236 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.236 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.236 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.236 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.237 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.237 { 00:17:36.237 "cntlid": 79, 00:17:36.237 "qid": 0, 00:17:36.237 "state": "enabled", 00:17:36.237 "thread": "nvmf_tgt_poll_group_000", 00:17:36.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:36.237 "listen_address": { 00:17:36.237 "trtype": "TCP", 00:17:36.237 "adrfam": "IPv4", 00:17:36.237 "traddr": "10.0.0.2", 00:17:36.237 "trsvcid": "4420" 00:17:36.237 }, 00:17:36.237 "peer_address": { 00:17:36.237 "trtype": "TCP", 00:17:36.237 "adrfam": "IPv4", 00:17:36.237 "traddr": "10.0.0.1", 00:17:36.237 "trsvcid": "35034" 00:17:36.237 }, 00:17:36.237 "auth": { 00:17:36.237 "state": "completed", 00:17:36.237 "digest": "sha384", 00:17:36.237 "dhgroup": "ffdhe4096" 00:17:36.237 } 00:17:36.237 } 00:17:36.237 ]' 00:17:36.237 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.497 16:42:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.497 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.497 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.497 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.497 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.497 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.497 16:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.755 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:36.755 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:37.322 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.322 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:37.322 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.322 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.322 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.322 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.322 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.322 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.322 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:37.581 16:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.581 16:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.839 00:17:37.839 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.839 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.839 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.097 { 00:17:38.097 "cntlid": 81, 00:17:38.097 "qid": 0, 00:17:38.097 "state": "enabled", 00:17:38.097 "thread": "nvmf_tgt_poll_group_000", 00:17:38.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:38.097 "listen_address": { 00:17:38.097 "trtype": "TCP", 00:17:38.097 "adrfam": "IPv4", 00:17:38.097 "traddr": "10.0.0.2", 00:17:38.097 "trsvcid": "4420" 00:17:38.097 }, 00:17:38.097 "peer_address": { 00:17:38.097 "trtype": "TCP", 00:17:38.097 "adrfam": "IPv4", 00:17:38.097 "traddr": "10.0.0.1", 00:17:38.097 "trsvcid": "35060" 00:17:38.097 }, 00:17:38.097 "auth": { 00:17:38.097 "state": "completed", 00:17:38.097 "digest": 
"sha384", 00:17:38.097 "dhgroup": "ffdhe6144" 00:17:38.097 } 00:17:38.097 } 00:17:38.097 ]' 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.097 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.356 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:38.356 16:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:38.922 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.922 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:38.922 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.922 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.922 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.922 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.922 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.922 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.181 16:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.439 00:17:39.439 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.439 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.439 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.698 { 00:17:39.698 "cntlid": 83, 00:17:39.698 "qid": 0, 00:17:39.698 "state": "enabled", 00:17:39.698 "thread": "nvmf_tgt_poll_group_000", 00:17:39.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:39.698 "listen_address": { 00:17:39.698 "trtype": "TCP", 00:17:39.698 "adrfam": "IPv4", 00:17:39.698 "traddr": "10.0.0.2", 00:17:39.698 
"trsvcid": "4420" 00:17:39.698 }, 00:17:39.698 "peer_address": { 00:17:39.698 "trtype": "TCP", 00:17:39.698 "adrfam": "IPv4", 00:17:39.698 "traddr": "10.0.0.1", 00:17:39.698 "trsvcid": "60088" 00:17:39.698 }, 00:17:39.698 "auth": { 00:17:39.698 "state": "completed", 00:17:39.698 "digest": "sha384", 00:17:39.698 "dhgroup": "ffdhe6144" 00:17:39.698 } 00:17:39.698 } 00:17:39.698 ]' 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.698 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.956 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.956 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.956 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.956 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:39.956 16:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:40.523 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.523 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:40.523 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.523 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.523 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.523 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.523 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.523 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.782 
16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.782 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.041 00:17:41.300 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.300 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.300 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.300 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.300 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.300 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.300 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.300 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.300 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.300 { 00:17:41.300 "cntlid": 85, 00:17:41.300 "qid": 0, 00:17:41.300 "state": "enabled", 00:17:41.300 "thread": "nvmf_tgt_poll_group_000", 00:17:41.300 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:41.300 "listen_address": { 00:17:41.300 "trtype": "TCP", 00:17:41.300 "adrfam": "IPv4", 00:17:41.300 "traddr": "10.0.0.2", 00:17:41.300 "trsvcid": "4420" 00:17:41.300 }, 00:17:41.300 "peer_address": { 00:17:41.300 "trtype": "TCP", 00:17:41.300 "adrfam": "IPv4", 00:17:41.300 "traddr": "10.0.0.1", 00:17:41.300 "trsvcid": "60120" 00:17:41.300 }, 00:17:41.300 "auth": { 00:17:41.300 "state": "completed", 00:17:41.300 "digest": "sha384", 00:17:41.300 "dhgroup": "ffdhe6144" 00:17:41.300 } 00:17:41.300 } 00:17:41.300 ]' 00:17:41.300 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.559 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.559 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.559 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.559 16:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.559 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.559 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.559 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.817 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:41.817 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.385 16:42:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.385 16:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.953 00:17:42.953 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.953 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.953 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.953 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.953 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.953 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.953 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.953 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.953 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.953 { 00:17:42.953 "cntlid": 87, 
00:17:42.953 "qid": 0, 00:17:42.953 "state": "enabled", 00:17:42.953 "thread": "nvmf_tgt_poll_group_000", 00:17:42.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:42.953 "listen_address": { 00:17:42.953 "trtype": "TCP", 00:17:42.953 "adrfam": "IPv4", 00:17:42.953 "traddr": "10.0.0.2", 00:17:42.953 "trsvcid": "4420" 00:17:42.953 }, 00:17:42.953 "peer_address": { 00:17:42.953 "trtype": "TCP", 00:17:42.953 "adrfam": "IPv4", 00:17:42.953 "traddr": "10.0.0.1", 00:17:42.953 "trsvcid": "60152" 00:17:42.953 }, 00:17:42.953 "auth": { 00:17:42.953 "state": "completed", 00:17:42.953 "digest": "sha384", 00:17:42.953 "dhgroup": "ffdhe6144" 00:17:42.953 } 00:17:42.953 } 00:17:42.953 ]' 00:17:42.953 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.212 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.212 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.212 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.212 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.212 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.212 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.212 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.472 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:43.472 16:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.040 16:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.609 00:17:44.609 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.609 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.609 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.866 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.867 { 00:17:44.867 "cntlid": 89, 00:17:44.867 "qid": 0, 00:17:44.867 "state": "enabled", 00:17:44.867 "thread": "nvmf_tgt_poll_group_000", 00:17:44.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:44.867 "listen_address": { 00:17:44.867 "trtype": "TCP", 00:17:44.867 "adrfam": "IPv4", 00:17:44.867 "traddr": "10.0.0.2", 00:17:44.867 "trsvcid": "4420" 00:17:44.867 }, 00:17:44.867 "peer_address": { 00:17:44.867 "trtype": "TCP", 00:17:44.867 "adrfam": "IPv4", 00:17:44.867 "traddr": "10.0.0.1", 00:17:44.867 "trsvcid": "60180" 00:17:44.867 }, 00:17:44.867 "auth": { 00:17:44.867 "state": "completed", 00:17:44.867 "digest": "sha384", 00:17:44.867 "dhgroup": "ffdhe8192" 00:17:44.867 } 00:17:44.867 } 00:17:44.867 ]' 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.867 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.124 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:45.124 16:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:45.689 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.689 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:45.689 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.689 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.689 16:42:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.689 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.689 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.689 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.947 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.513 00:17:46.513 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.513 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.513 16:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.771 { 00:17:46.771 "cntlid": 91, 00:17:46.771 "qid": 0, 00:17:46.771 "state": "enabled", 00:17:46.771 "thread": "nvmf_tgt_poll_group_000", 00:17:46.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:46.771 "listen_address": { 00:17:46.771 "trtype": "TCP", 00:17:46.771 "adrfam": "IPv4", 00:17:46.771 "traddr": "10.0.0.2", 00:17:46.771 "trsvcid": "4420" 00:17:46.771 }, 00:17:46.771 "peer_address": { 00:17:46.771 "trtype": "TCP", 00:17:46.771 "adrfam": "IPv4", 00:17:46.771 "traddr": "10.0.0.1", 00:17:46.771 "trsvcid": "60200" 00:17:46.771 }, 00:17:46.771 "auth": { 00:17:46.771 "state": "completed", 00:17:46.771 "digest": "sha384", 00:17:46.771 "dhgroup": "ffdhe8192" 00:17:46.771 } 00:17:46.771 } 00:17:46.771 ]' 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.771 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.030 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:47.030 16:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:47.616 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.616 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:47.616 16:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.616 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.616 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.616 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.616 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.616 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.875 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.442 00:17:48.442 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.442 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.442 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.442 16:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.442 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.442 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.442 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.442 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.442 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.442 { 00:17:48.442 "cntlid": 93, 00:17:48.442 "qid": 0, 00:17:48.442 "state": "enabled", 00:17:48.442 "thread": "nvmf_tgt_poll_group_000", 00:17:48.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:48.442 "listen_address": { 00:17:48.442 "trtype": "TCP", 00:17:48.442 "adrfam": "IPv4", 00:17:48.442 "traddr": "10.0.0.2", 00:17:48.442 "trsvcid": "4420" 00:17:48.442 }, 00:17:48.442 "peer_address": { 00:17:48.442 "trtype": "TCP", 00:17:48.442 "adrfam": "IPv4", 00:17:48.442 "traddr": "10.0.0.1", 00:17:48.442 "trsvcid": "60228" 00:17:48.442 }, 00:17:48.442 "auth": { 00:17:48.442 "state": "completed", 00:17:48.442 "digest": "sha384", 00:17:48.442 "dhgroup": "ffdhe8192" 00:17:48.442 } 00:17:48.442 } 00:17:48.442 ]' 00:17:48.442 16:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.442 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.442 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.442 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.442 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.700 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.700 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.700 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.700 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:48.700 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:49.267 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.267 16:42:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:49.267 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.267 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.267 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.267 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.267 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.267 16:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.526 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.091 00:17:50.091 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.091 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.091 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.349 { 00:17:50.349 "cntlid": 95, 00:17:50.349 "qid": 0, 00:17:50.349 "state": "enabled", 00:17:50.349 "thread": "nvmf_tgt_poll_group_000", 00:17:50.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:50.349 "listen_address": { 00:17:50.349 "trtype": "TCP", 00:17:50.349 "adrfam": "IPv4", 00:17:50.349 "traddr": "10.0.0.2", 00:17:50.349 "trsvcid": "4420" 00:17:50.349 }, 00:17:50.349 "peer_address": { 00:17:50.349 "trtype": "TCP", 00:17:50.349 "adrfam": "IPv4", 00:17:50.349 "traddr": "10.0.0.1", 00:17:50.349 "trsvcid": "39436" 00:17:50.349 }, 00:17:50.349 "auth": { 00:17:50.349 "state": "completed", 00:17:50.349 "digest": "sha384", 00:17:50.349 "dhgroup": "ffdhe8192" 00:17:50.349 } 00:17:50.349 } 00:17:50.349 ]' 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.349 16:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.607 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:50.607 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:51.173 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.173 16:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:51.173 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.173 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.173 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.173 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:51.173 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.173 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.173 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.173 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.431 16:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.688 00:17:51.688 
16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.688 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.688 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.688 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.688 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.688 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.688 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.947 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.947 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.947 { 00:17:51.947 "cntlid": 97, 00:17:51.947 "qid": 0, 00:17:51.947 "state": "enabled", 00:17:51.947 "thread": "nvmf_tgt_poll_group_000", 00:17:51.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:51.947 "listen_address": { 00:17:51.947 "trtype": "TCP", 00:17:51.947 "adrfam": "IPv4", 00:17:51.947 "traddr": "10.0.0.2", 00:17:51.947 "trsvcid": "4420" 00:17:51.947 }, 00:17:51.947 "peer_address": { 00:17:51.947 "trtype": "TCP", 00:17:51.947 "adrfam": "IPv4", 00:17:51.947 "traddr": "10.0.0.1", 00:17:51.947 "trsvcid": "39466" 00:17:51.947 }, 00:17:51.947 "auth": { 00:17:51.947 "state": "completed", 00:17:51.947 "digest": "sha512", 00:17:51.947 "dhgroup": "null" 00:17:51.947 } 00:17:51.947 } 00:17:51.947 ]' 00:17:51.947 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.947 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.947 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.947 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:51.947 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.947 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.947 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.947 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.205 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:52.205 16:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:52.771 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.771 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:52.771 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.771 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.771 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.771 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.771 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.771 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.028 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.028 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.286 { 00:17:53.286 "cntlid": 99, 00:17:53.286 "qid": 0, 00:17:53.286 "state": "enabled", 00:17:53.286 "thread": "nvmf_tgt_poll_group_000", 00:17:53.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:53.286 "listen_address": { 00:17:53.286 "trtype": "TCP", 00:17:53.286 "adrfam": "IPv4", 00:17:53.286 "traddr": "10.0.0.2", 00:17:53.286 "trsvcid": "4420" 00:17:53.286 }, 00:17:53.286 "peer_address": { 00:17:53.286 "trtype": "TCP", 00:17:53.286 "adrfam": "IPv4", 00:17:53.286 "traddr": "10.0.0.1", 00:17:53.286 "trsvcid": "39494" 00:17:53.286 }, 00:17:53.286 "auth": { 00:17:53.286 "state": "completed", 00:17:53.286 "digest": "sha512", 00:17:53.286 "dhgroup": "null" 00:17:53.286 } 00:17:53.286 } 00:17:53.286 ]' 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.286 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.543 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:53.543 16:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.543 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.544 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.544 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.802 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:53.802 16:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
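The trace above and below repeats the same connect_authenticate cycle once per key. A condensed sketch of that cycle follows, assembled only from the commands visible in this trace; hostnqn, hostid and the secret values are placeholders, key0/ckey0 are names of keys registered earlier in the run (not shown in this section), and the host-side RPC socket path is the one used by the hostrpc calls above:

  # Host-side RPCs go to the second SPDK app acting as initiator; target-side RPCs use the default socket.
  hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  # 1. Pin the host to one digest/dhgroup combination for this pass.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
  # 2. Register the host on the subsystem with the key pair under test.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3. Attach through bdev_nvme, confirm the controller came up, then detach
  #    (the qpair auth checks are sketched at the end of this section).
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'     # expect nvme0
  hostrpc bdev_nvme_detach_controller nvme0
  # 4. Repeat the connection with the kernel initiator using the raw DHHC-1 secrets.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
      -l 0 --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # 5. Clean up before the next key.
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"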
00:17:54.369 16:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.628 00:17:54.629 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.629 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.629 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.888 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.888 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.888 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.888 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.888 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.888 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.888 { 00:17:54.888 "cntlid": 101, 00:17:54.888 "qid": 0, 00:17:54.888 "state": "enabled", 00:17:54.888 "thread": "nvmf_tgt_poll_group_000", 00:17:54.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:54.888 "listen_address": { 00:17:54.888 "trtype": "TCP", 00:17:54.888 "adrfam": "IPv4", 00:17:54.888 "traddr": "10.0.0.2", 00:17:54.888 "trsvcid": "4420" 00:17:54.888 }, 00:17:54.888 "peer_address": { 00:17:54.888 "trtype": "TCP", 00:17:54.888 "adrfam": "IPv4", 00:17:54.888 "traddr": "10.0.0.1", 00:17:54.888 "trsvcid": "39506" 00:17:54.888 }, 00:17:54.888 "auth": { 00:17:54.888 "state": "completed", 00:17:54.888 "digest": "sha512", 00:17:54.888 "dhgroup": "null" 00:17:54.888 } 00:17:54.888 } 00:17:54.888 ]' 00:17:54.888 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.888 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.888 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.146 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:55.146 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.146 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.146 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.146 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.146 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:55.146 16:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:17:55.713 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.973 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.232 00:17:56.232 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.232 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.232 16:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.490 { 00:17:56.490 "cntlid": 103, 00:17:56.490 "qid": 0, 00:17:56.490 "state": "enabled", 00:17:56.490 "thread": "nvmf_tgt_poll_group_000", 00:17:56.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:56.490 "listen_address": { 00:17:56.490 "trtype": "TCP", 00:17:56.490 "adrfam": "IPv4", 00:17:56.490 "traddr": "10.0.0.2", 00:17:56.490 "trsvcid": "4420" 00:17:56.490 }, 00:17:56.490 "peer_address": { 00:17:56.490 "trtype": "TCP", 00:17:56.490 "adrfam": "IPv4", 00:17:56.490 "traddr": "10.0.0.1", 00:17:56.490 "trsvcid": "39540" 00:17:56.490 }, 00:17:56.490 "auth": { 00:17:56.490 "state": "completed", 00:17:56.490 "digest": "sha512", 00:17:56.490 "dhgroup": "null" 00:17:56.490 } 00:17:56.490 } 00:17:56.490 ]' 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:56.490 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.749 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.749 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.749 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.749 16:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:56.749 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:17:57.316 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.316 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:57.316 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.316 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.316 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.316 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.316 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.316 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.316 16:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.575 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:57.575 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.575 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.575 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:57.575 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.575 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.576 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.576 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.576 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.576 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.576 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
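The switch from the null dhgroup to ffdhe2048 just above reflects the outer loops referenced throughout the trace (target/auth.sh@118-@123). Paraphrased from those line references only, as a sketch of the control flow rather than the exact script, the matrix being swept is roughly:

  for digest in "${digests[@]}"; do          # @118
    for dhgroup in "${dhgroups[@]}"; do      # @119
      for keyid in "${!keys[@]}"; do         # @120
        # @121: restrict the host to the combination under test (hostrpc as sketched earlier)
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # @123: run one full add_host/attach/verify/connect/cleanup cycle
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done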
00:17:57.576 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.576 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.835 00:17:57.835 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.835 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.835 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.094 { 00:17:58.094 "cntlid": 105, 00:17:58.094 "qid": 0, 00:17:58.094 "state": "enabled", 00:17:58.094 "thread": "nvmf_tgt_poll_group_000", 00:17:58.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:58.094 "listen_address": { 00:17:58.094 "trtype": "TCP", 00:17:58.094 "adrfam": "IPv4", 00:17:58.094 "traddr": "10.0.0.2", 00:17:58.094 "trsvcid": "4420" 00:17:58.094 }, 00:17:58.094 "peer_address": { 00:17:58.094 "trtype": "TCP", 00:17:58.094 "adrfam": "IPv4", 00:17:58.094 "traddr": "10.0.0.1", 00:17:58.094 "trsvcid": "39566" 00:17:58.094 }, 00:17:58.094 "auth": { 00:17:58.094 "state": "completed", 00:17:58.094 "digest": "sha512", 00:17:58.094 "dhgroup": "ffdhe2048" 00:17:58.094 } 00:17:58.094 } 00:17:58.094 ]' 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.094 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.352 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.352 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.352 16:43:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.352 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:58.352 16:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:17:58.919 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.920 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.920 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.920 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.920 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.920 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.920 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:58.920 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.204 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:59.204 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.204 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.204 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:59.204 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.204 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.204 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.204 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.204 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.205 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.205 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.205 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.205 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.479 00:17:59.479 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.479 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.479 16:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.764 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.764 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.764 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.764 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.764 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.764 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.764 { 00:17:59.764 "cntlid": 107, 00:17:59.764 "qid": 0, 00:17:59.764 "state": "enabled", 00:17:59.764 "thread": "nvmf_tgt_poll_group_000", 00:17:59.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:59.764 "listen_address": { 00:17:59.764 "trtype": "TCP", 00:17:59.764 "adrfam": "IPv4", 00:17:59.764 "traddr": "10.0.0.2", 00:17:59.764 "trsvcid": "4420" 00:17:59.764 }, 00:17:59.764 "peer_address": { 00:17:59.764 "trtype": "TCP", 00:17:59.764 "adrfam": "IPv4", 00:17:59.764 "traddr": "10.0.0.1", 00:17:59.764 "trsvcid": "43494" 00:17:59.764 }, 00:17:59.764 "auth": { 00:17:59.764 "state": "completed", 00:17:59.764 "digest": "sha512", 00:17:59.764 "dhgroup": "ffdhe2048" 00:17:59.764 } 00:17:59.764 } 00:17:59.764 ]' 00:17:59.764 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.764 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.765 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.765 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:59.765 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:59.765 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.765 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.765 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.022 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:18:00.023 16:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:18:00.588 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.588 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.588 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.588 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.588 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.588 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.588 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.588 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.847 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.106 00:18:01.106 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.106 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.106 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.106 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.106 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.106 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.106 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.106 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.106 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.106 { 00:18:01.106 "cntlid": 109, 00:18:01.106 "qid": 0, 00:18:01.106 "state": "enabled", 00:18:01.106 "thread": "nvmf_tgt_poll_group_000", 00:18:01.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:01.106 "listen_address": { 00:18:01.106 "trtype": "TCP", 00:18:01.106 "adrfam": "IPv4", 00:18:01.106 "traddr": "10.0.0.2", 00:18:01.106 "trsvcid": "4420" 00:18:01.106 }, 00:18:01.106 "peer_address": { 00:18:01.106 "trtype": "TCP", 00:18:01.106 "adrfam": "IPv4", 00:18:01.106 "traddr": "10.0.0.1", 00:18:01.106 "trsvcid": "43522" 00:18:01.106 }, 00:18:01.106 "auth": { 00:18:01.106 "state": "completed", 00:18:01.106 "digest": "sha512", 00:18:01.106 "dhgroup": "ffdhe2048" 00:18:01.106 } 00:18:01.106 } 00:18:01.106 ]' 00:18:01.106 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.365 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.365 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.365 16:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.365 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.365 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.365 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.365 16:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.622 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:18:01.622 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:18:02.188 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.189 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.189 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.189 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.189 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.189 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.189 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.189 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.448 16:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.448 16:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.448 00:18:02.707 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.707 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.707 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.707 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.707 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.707 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.707 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.707 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.707 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.707 { 00:18:02.707 "cntlid": 111, 00:18:02.707 "qid": 0, 00:18:02.707 "state": "enabled", 00:18:02.707 "thread": "nvmf_tgt_poll_group_000", 00:18:02.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:02.707 "listen_address": { 00:18:02.707 "trtype": "TCP", 00:18:02.707 "adrfam": "IPv4", 00:18:02.707 "traddr": "10.0.0.2", 00:18:02.707 "trsvcid": "4420" 00:18:02.707 }, 00:18:02.707 "peer_address": { 00:18:02.707 "trtype": "TCP", 00:18:02.707 "adrfam": "IPv4", 00:18:02.707 "traddr": "10.0.0.1", 00:18:02.707 "trsvcid": "43550" 00:18:02.707 }, 00:18:02.707 "auth": { 00:18:02.707 "state": "completed", 00:18:02.707 "digest": "sha512", 00:18:02.707 "dhgroup": "ffdhe2048" 00:18:02.707 } 00:18:02.707 } 00:18:02.707 ]' 00:18:02.707 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.966 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.966 
16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.966 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.966 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.966 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.966 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.966 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.224 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:03.224 16:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.802 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.061 00:18:04.319 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.319 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.319 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.319 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.319 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.319 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.319 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.319 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.319 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.319 { 00:18:04.319 "cntlid": 113, 00:18:04.319 "qid": 0, 00:18:04.319 "state": "enabled", 00:18:04.319 "thread": "nvmf_tgt_poll_group_000", 00:18:04.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:04.319 "listen_address": { 00:18:04.319 "trtype": "TCP", 00:18:04.319 "adrfam": "IPv4", 00:18:04.319 "traddr": "10.0.0.2", 00:18:04.319 "trsvcid": "4420" 00:18:04.319 }, 00:18:04.319 "peer_address": { 00:18:04.319 "trtype": "TCP", 00:18:04.319 "adrfam": "IPv4", 00:18:04.319 "traddr": "10.0.0.1", 00:18:04.319 "trsvcid": "43582" 00:18:04.319 }, 00:18:04.319 "auth": { 00:18:04.319 "state": "completed", 00:18:04.319 "digest": "sha512", 00:18:04.319 "dhgroup": "ffdhe3072" 00:18:04.319 } 00:18:04.319 } 00:18:04.319 ]' 00:18:04.319 16:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.578 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.578 16:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.578 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.837 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:18:04.837 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:18:05.402 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.402 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:05.402 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.402 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.402 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.402 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.402 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.402 16:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.402 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.660 00:18:05.660 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.660 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.661 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.919 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.919 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.919 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.919 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.919 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.919 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.919 { 00:18:05.919 "cntlid": 115, 00:18:05.919 "qid": 0, 00:18:05.919 "state": "enabled", 00:18:05.919 "thread": "nvmf_tgt_poll_group_000", 00:18:05.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:05.919 "listen_address": { 00:18:05.919 "trtype": "TCP", 00:18:05.919 "adrfam": "IPv4", 00:18:05.919 "traddr": "10.0.0.2", 00:18:05.919 "trsvcid": "4420" 00:18:05.919 }, 00:18:05.919 "peer_address": { 00:18:05.919 "trtype": "TCP", 00:18:05.919 "adrfam": "IPv4", 
00:18:05.919 "traddr": "10.0.0.1", 00:18:05.919 "trsvcid": "43610" 00:18:05.919 }, 00:18:05.919 "auth": { 00:18:05.919 "state": "completed", 00:18:05.919 "digest": "sha512", 00:18:05.919 "dhgroup": "ffdhe3072" 00:18:05.919 } 00:18:05.919 } 00:18:05.919 ]' 00:18:05.919 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.919 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.919 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.178 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.178 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.178 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.178 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.178 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.436 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:18:06.437 16:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.003 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.262 00:18:07.262 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.262 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.262 16:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.521 { 00:18:07.521 "cntlid": 117, 00:18:07.521 "qid": 0, 00:18:07.521 "state": "enabled", 00:18:07.521 "thread": "nvmf_tgt_poll_group_000", 00:18:07.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:07.521 "listen_address": { 00:18:07.521 "trtype": "TCP", 
00:18:07.521 "adrfam": "IPv4", 00:18:07.521 "traddr": "10.0.0.2", 00:18:07.521 "trsvcid": "4420" 00:18:07.521 }, 00:18:07.521 "peer_address": { 00:18:07.521 "trtype": "TCP", 00:18:07.521 "adrfam": "IPv4", 00:18:07.521 "traddr": "10.0.0.1", 00:18:07.521 "trsvcid": "43644" 00:18:07.521 }, 00:18:07.521 "auth": { 00:18:07.521 "state": "completed", 00:18:07.521 "digest": "sha512", 00:18:07.521 "dhgroup": "ffdhe3072" 00:18:07.521 } 00:18:07.521 } 00:18:07.521 ]' 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.521 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.780 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.780 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.780 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.780 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:18:07.780 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:18:08.346 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.346 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:08.346 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.346 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.346 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.346 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.346 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:08.346 16:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.604 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.862 00:18:08.862 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.862 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.862 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.120 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.120 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.120 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.120 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.120 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.120 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.120 { 00:18:09.120 "cntlid": 119, 00:18:09.120 "qid": 0, 00:18:09.120 "state": "enabled", 00:18:09.121 "thread": "nvmf_tgt_poll_group_000", 00:18:09.121 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:09.121 "listen_address": { 00:18:09.121 "trtype": "TCP", 00:18:09.121 "adrfam": "IPv4", 00:18:09.121 "traddr": "10.0.0.2", 00:18:09.121 "trsvcid": "4420" 00:18:09.121 }, 00:18:09.121 "peer_address": { 00:18:09.121 "trtype": "TCP", 00:18:09.121 "adrfam": "IPv4", 00:18:09.121 "traddr": "10.0.0.1", 00:18:09.121 "trsvcid": "43674" 00:18:09.121 }, 00:18:09.121 "auth": { 00:18:09.121 "state": "completed", 00:18:09.121 "digest": "sha512", 00:18:09.121 "dhgroup": "ffdhe3072" 00:18:09.121 } 00:18:09.121 } 00:18:09.121 ]' 00:18:09.121 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.121 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.121 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.121 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.121 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.121 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.121 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.121 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.379 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:09.379 16:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:09.945 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.945 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:09.945 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.945 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.945 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.945 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.945 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.945 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.945 16:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.203 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.461 00:18:10.461 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.461 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.461 16:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.720 16:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.720 { 00:18:10.720 "cntlid": 121, 00:18:10.720 "qid": 0, 00:18:10.720 "state": "enabled", 00:18:10.720 "thread": "nvmf_tgt_poll_group_000", 00:18:10.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:10.720 "listen_address": { 00:18:10.720 "trtype": "TCP", 00:18:10.720 "adrfam": "IPv4", 00:18:10.720 "traddr": "10.0.0.2", 00:18:10.720 "trsvcid": "4420" 00:18:10.720 }, 00:18:10.720 "peer_address": { 00:18:10.720 "trtype": "TCP", 00:18:10.720 "adrfam": "IPv4", 00:18:10.720 "traddr": "10.0.0.1", 00:18:10.720 "trsvcid": "42496" 00:18:10.720 }, 00:18:10.720 "auth": { 00:18:10.720 "state": "completed", 00:18:10.720 "digest": "sha512", 00:18:10.720 "dhgroup": "ffdhe4096" 00:18:10.720 } 00:18:10.720 } 00:18:10.720 ]' 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.720 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.978 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:18:10.979 16:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:18:11.544 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.544 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.544 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.544 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.544 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:11.544 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.544 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.544 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.802 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.060 00:18:12.060 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.060 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.060 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.318 { 00:18:12.318 "cntlid": 123, 00:18:12.318 "qid": 0, 00:18:12.318 "state": "enabled", 00:18:12.318 "thread": "nvmf_tgt_poll_group_000", 00:18:12.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:12.318 "listen_address": { 00:18:12.318 "trtype": "TCP", 00:18:12.318 "adrfam": "IPv4", 00:18:12.318 "traddr": "10.0.0.2", 00:18:12.318 "trsvcid": "4420" 00:18:12.318 }, 00:18:12.318 "peer_address": { 00:18:12.318 "trtype": "TCP", 00:18:12.318 "adrfam": "IPv4", 00:18:12.318 "traddr": "10.0.0.1", 00:18:12.318 "trsvcid": "42526" 00:18:12.318 }, 00:18:12.318 "auth": { 00:18:12.318 "state": "completed", 00:18:12.318 "digest": "sha512", 00:18:12.318 "dhgroup": "ffdhe4096" 00:18:12.318 } 00:18:12.318 } 00:18:12.318 ]' 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.318 16:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.577 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:18:12.577 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:18:13.143 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.143 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:13.143 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.143 16:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.143 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.143 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.143 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.143 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.401 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:13.401 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.401 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.401 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:13.401 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:13.401 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.401 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.402 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.402 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.402 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.402 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.402 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.402 16:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.660 00:18:13.661 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.661 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.661 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.919 16:43:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.919 { 00:18:13.919 "cntlid": 125, 00:18:13.919 "qid": 0, 00:18:13.919 "state": "enabled", 00:18:13.919 "thread": "nvmf_tgt_poll_group_000", 00:18:13.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:13.919 "listen_address": { 00:18:13.919 "trtype": "TCP", 00:18:13.919 "adrfam": "IPv4", 00:18:13.919 "traddr": "10.0.0.2", 00:18:13.919 "trsvcid": "4420" 00:18:13.919 }, 00:18:13.919 "peer_address": { 00:18:13.919 "trtype": "TCP", 00:18:13.919 "adrfam": "IPv4", 00:18:13.919 "traddr": "10.0.0.1", 00:18:13.919 "trsvcid": "42560" 00:18:13.919 }, 00:18:13.919 "auth": { 00:18:13.919 "state": "completed", 00:18:13.919 "digest": "sha512", 00:18:13.919 "dhgroup": "ffdhe4096" 00:18:13.919 } 00:18:13.919 } 00:18:13.919 ]' 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.919 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.177 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:18:14.177 16:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:18:14.745 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.745 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:14.745 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.745 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.745 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.745 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.745 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.745 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.003 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:15.262 00:18:15.262 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.262 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.262 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.521 16:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.521 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.521 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.521 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.521 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.521 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.521 { 00:18:15.521 "cntlid": 127, 00:18:15.521 "qid": 0, 00:18:15.521 "state": "enabled", 00:18:15.521 "thread": "nvmf_tgt_poll_group_000", 00:18:15.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:15.521 "listen_address": { 00:18:15.521 "trtype": "TCP", 00:18:15.521 "adrfam": "IPv4", 00:18:15.521 "traddr": "10.0.0.2", 00:18:15.521 "trsvcid": "4420" 00:18:15.521 }, 00:18:15.521 "peer_address": { 00:18:15.521 "trtype": "TCP", 00:18:15.521 "adrfam": "IPv4", 00:18:15.521 "traddr": "10.0.0.1", 00:18:15.521 "trsvcid": "42596" 00:18:15.521 }, 00:18:15.521 "auth": { 00:18:15.521 "state": "completed", 00:18:15.521 "digest": "sha512", 00:18:15.521 "dhgroup": "ffdhe4096" 00:18:15.521 } 00:18:15.521 } 00:18:15.521 ]' 00:18:15.521 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.521 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.521 16:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.521 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.521 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.521 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.521 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.521 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.780 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:15.780 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:16.346 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.346 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.346 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.346 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.346 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.346 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.346 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.346 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.346 16:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.604 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.862 00:18:16.862 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.862 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.863 
16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.121 { 00:18:17.121 "cntlid": 129, 00:18:17.121 "qid": 0, 00:18:17.121 "state": "enabled", 00:18:17.121 "thread": "nvmf_tgt_poll_group_000", 00:18:17.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:17.121 "listen_address": { 00:18:17.121 "trtype": "TCP", 00:18:17.121 "adrfam": "IPv4", 00:18:17.121 "traddr": "10.0.0.2", 00:18:17.121 "trsvcid": "4420" 00:18:17.121 }, 00:18:17.121 "peer_address": { 00:18:17.121 "trtype": "TCP", 00:18:17.121 "adrfam": "IPv4", 00:18:17.121 "traddr": "10.0.0.1", 00:18:17.121 "trsvcid": "42618" 00:18:17.121 }, 00:18:17.121 "auth": { 00:18:17.121 "state": "completed", 00:18:17.121 "digest": "sha512", 00:18:17.121 "dhgroup": "ffdhe6144" 00:18:17.121 } 00:18:17.121 } 00:18:17.121 ]' 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.121 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.380 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.380 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.380 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.380 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:18:17.380 16:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret 
DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:18:17.947 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.947 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:17.947 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.947 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.947 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.947 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.947 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.947 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.205 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.206 16:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.774 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.774 { 00:18:18.774 "cntlid": 131, 00:18:18.774 "qid": 0, 00:18:18.774 "state": "enabled", 00:18:18.774 "thread": "nvmf_tgt_poll_group_000", 00:18:18.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:18.774 "listen_address": { 00:18:18.774 "trtype": "TCP", 00:18:18.774 "adrfam": "IPv4", 00:18:18.774 "traddr": "10.0.0.2", 00:18:18.774 "trsvcid": "4420" 00:18:18.774 }, 00:18:18.774 "peer_address": { 00:18:18.774 "trtype": "TCP", 00:18:18.774 "adrfam": "IPv4", 00:18:18.774 "traddr": "10.0.0.1", 00:18:18.774 "trsvcid": "42642" 00:18:18.774 }, 00:18:18.774 "auth": { 00:18:18.774 "state": "completed", 00:18:18.774 "digest": "sha512", 00:18:18.774 "dhgroup": "ffdhe6144" 00:18:18.774 } 00:18:18.774 } 00:18:18.774 ]' 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.774 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.031 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.031 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.031 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.031 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.031 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.031 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:18:19.031 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.966 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.224 00:18:20.224 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.224 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.224 16:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.484 { 00:18:20.484 "cntlid": 133, 00:18:20.484 "qid": 0, 00:18:20.484 "state": "enabled", 00:18:20.484 "thread": "nvmf_tgt_poll_group_000", 00:18:20.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:20.484 "listen_address": { 00:18:20.484 "trtype": "TCP", 00:18:20.484 "adrfam": "IPv4", 00:18:20.484 "traddr": "10.0.0.2", 00:18:20.484 "trsvcid": "4420" 00:18:20.484 }, 00:18:20.484 "peer_address": { 00:18:20.484 "trtype": "TCP", 00:18:20.484 "adrfam": "IPv4", 00:18:20.484 "traddr": "10.0.0.1", 00:18:20.484 "trsvcid": "60690" 00:18:20.484 }, 00:18:20.484 "auth": { 00:18:20.484 "state": "completed", 00:18:20.484 "digest": "sha512", 00:18:20.484 "dhgroup": "ffdhe6144" 00:18:20.484 } 00:18:20.484 } 00:18:20.484 ]' 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.484 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.743 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.743 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.743 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.743 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret 
DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:18:20.743 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:18:21.310 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.310 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:21.310 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.310 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.310 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.311 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.311 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.311 16:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:21.569 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.828 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.086 { 00:18:22.086 "cntlid": 135, 00:18:22.086 "qid": 0, 00:18:22.086 "state": "enabled", 00:18:22.086 "thread": "nvmf_tgt_poll_group_000", 00:18:22.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:22.086 "listen_address": { 00:18:22.086 "trtype": "TCP", 00:18:22.086 "adrfam": "IPv4", 00:18:22.086 "traddr": "10.0.0.2", 00:18:22.086 "trsvcid": "4420" 00:18:22.086 }, 00:18:22.086 "peer_address": { 00:18:22.086 "trtype": "TCP", 00:18:22.086 "adrfam": "IPv4", 00:18:22.086 "traddr": "10.0.0.1", 00:18:22.086 "trsvcid": "60710" 00:18:22.086 }, 00:18:22.086 "auth": { 00:18:22.086 "state": "completed", 00:18:22.086 "digest": "sha512", 00:18:22.086 "dhgroup": "ffdhe6144" 00:18:22.086 } 00:18:22.086 } 00:18:22.086 ]' 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.086 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.345 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:22.345 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.345 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.345 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.345 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.604 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:22.604 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.171 16:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.739 00:18:23.739 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.739 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.739 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.998 { 00:18:23.998 "cntlid": 137, 00:18:23.998 "qid": 0, 00:18:23.998 "state": "enabled", 00:18:23.998 "thread": "nvmf_tgt_poll_group_000", 00:18:23.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:23.998 "listen_address": { 00:18:23.998 "trtype": "TCP", 00:18:23.998 "adrfam": "IPv4", 00:18:23.998 "traddr": "10.0.0.2", 00:18:23.998 "trsvcid": "4420" 00:18:23.998 }, 00:18:23.998 "peer_address": { 00:18:23.998 "trtype": "TCP", 00:18:23.998 "adrfam": "IPv4", 00:18:23.998 "traddr": "10.0.0.1", 00:18:23.998 "trsvcid": "60738" 00:18:23.998 }, 00:18:23.998 "auth": { 00:18:23.998 "state": "completed", 00:18:23.998 "digest": "sha512", 00:18:23.998 "dhgroup": "ffdhe8192" 00:18:23.998 } 00:18:23.998 } 00:18:23.998 ]' 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.998 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.256 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.256 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.256 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.256 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:18:24.256 16:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:18:24.825 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.825 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:24.825 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.825 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.825 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.825 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.825 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.825 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.083 16:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.083 16:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.649 00:18:25.649 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.649 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.649 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.907 { 00:18:25.907 "cntlid": 139, 00:18:25.907 "qid": 0, 00:18:25.907 "state": "enabled", 00:18:25.907 "thread": "nvmf_tgt_poll_group_000", 00:18:25.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:25.907 "listen_address": { 00:18:25.907 "trtype": "TCP", 00:18:25.907 "adrfam": "IPv4", 00:18:25.907 "traddr": "10.0.0.2", 00:18:25.907 "trsvcid": "4420" 00:18:25.907 }, 00:18:25.907 "peer_address": { 00:18:25.907 "trtype": "TCP", 00:18:25.907 "adrfam": "IPv4", 00:18:25.907 "traddr": "10.0.0.1", 00:18:25.907 "trsvcid": "60772" 00:18:25.907 }, 00:18:25.907 "auth": { 00:18:25.907 "state": "completed", 00:18:25.907 "digest": "sha512", 00:18:25.907 "dhgroup": "ffdhe8192" 00:18:25.907 } 00:18:25.907 } 00:18:25.907 ]' 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.907 16:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.907 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.165 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:18:26.165 16:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: --dhchap-ctrl-secret DHHC-1:02:OTBkMWZkN2VmZThhZjgwYTdkYWIwZGNkZWFjN2ZmOTcyN2Y3NDQyMWNmYmY0MzhiUc6fbA==: 00:18:26.732 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.732 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.732 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.732 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.732 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.732 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.732 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.732 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.992 16:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.992 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.251 00:18:27.510 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.510 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.510 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.510 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.510 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.510 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.510 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.510 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.510 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.510 { 00:18:27.510 "cntlid": 141, 00:18:27.510 "qid": 0, 00:18:27.510 "state": "enabled", 00:18:27.510 "thread": "nvmf_tgt_poll_group_000", 00:18:27.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:27.510 "listen_address": { 00:18:27.510 "trtype": "TCP", 00:18:27.510 "adrfam": "IPv4", 00:18:27.510 "traddr": "10.0.0.2", 00:18:27.510 "trsvcid": "4420" 00:18:27.510 }, 00:18:27.510 "peer_address": { 00:18:27.510 "trtype": "TCP", 00:18:27.510 "adrfam": "IPv4", 00:18:27.510 "traddr": "10.0.0.1", 00:18:27.510 "trsvcid": "60800" 00:18:27.510 }, 00:18:27.510 "auth": { 00:18:27.510 "state": "completed", 00:18:27.510 "digest": "sha512", 00:18:27.510 "dhgroup": "ffdhe8192" 00:18:27.510 } 00:18:27.510 } 00:18:27.510 ]' 00:18:27.510 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.510 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.510 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.769 16:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.769 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.769 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.769 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.769 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.769 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:18:27.769 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:01:ODk2NzM0NDgxZjBlNGIwNjU4NjAzODNkZTM2N2VjMDJM6/rN: 00:18:28.336 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.336 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:28.336 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.336 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.594 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.594 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.594 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:28.594 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:28.594 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:28.594 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.594 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.594 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:28.594 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:28.594 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.595 16:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:28.595 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.595 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.595 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.595 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:28.595 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:28.595 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.162 00:18:29.162 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.162 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.162 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.421 { 00:18:29.421 "cntlid": 143, 00:18:29.421 "qid": 0, 00:18:29.421 "state": "enabled", 00:18:29.421 "thread": "nvmf_tgt_poll_group_000", 00:18:29.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:29.421 "listen_address": { 00:18:29.421 "trtype": "TCP", 00:18:29.421 "adrfam": "IPv4", 00:18:29.421 "traddr": "10.0.0.2", 00:18:29.421 "trsvcid": "4420" 00:18:29.421 }, 00:18:29.421 "peer_address": { 00:18:29.421 "trtype": "TCP", 00:18:29.421 "adrfam": "IPv4", 00:18:29.421 "traddr": "10.0.0.1", 00:18:29.421 "trsvcid": "60820" 00:18:29.421 }, 00:18:29.421 "auth": { 00:18:29.421 "state": "completed", 00:18:29.421 "digest": "sha512", 00:18:29.421 "dhgroup": "ffdhe8192" 00:18:29.421 } 00:18:29.421 } 00:18:29.421 ]' 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.421 
16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.421 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.679 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:29.679 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:30.246 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.505 16:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.505 16:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.072 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.072 { 00:18:31.072 "cntlid": 145, 00:18:31.072 "qid": 0, 00:18:31.072 "state": "enabled", 00:18:31.072 "thread": "nvmf_tgt_poll_group_000", 00:18:31.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:31.072 "listen_address": { 00:18:31.072 "trtype": "TCP", 00:18:31.072 "adrfam": "IPv4", 00:18:31.072 "traddr": "10.0.0.2", 00:18:31.072 "trsvcid": "4420" 00:18:31.072 }, 00:18:31.072 "peer_address": { 00:18:31.072 
"trtype": "TCP", 00:18:31.072 "adrfam": "IPv4", 00:18:31.072 "traddr": "10.0.0.1", 00:18:31.072 "trsvcid": "33858" 00:18:31.072 }, 00:18:31.072 "auth": { 00:18:31.072 "state": "completed", 00:18:31.072 "digest": "sha512", 00:18:31.072 "dhgroup": "ffdhe8192" 00:18:31.072 } 00:18:31.072 } 00:18:31.072 ]' 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.072 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.331 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.331 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.331 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.331 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.331 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.590 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:18:31.590 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZDgwNjA1MmFkOGI5ZjIxMDVmMjFhNjUyYTI2M2ZiNDM2ZjA1YTk0NDNiYTc4YjMyQDQGTw==: --dhchap-ctrl-secret DHHC-1:03:ZjBkMTM3OTI4MWY3NDE4YjVhNmRhYmJkMDE4ZGNlYWM1M2JlZWMwNTQ4NDNiMjc2MzY5ZjA5ZmVhMGY2YjQyMqDaJ3w=: 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:32.159 16:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:32.418 request: 00:18:32.418 { 00:18:32.418 "name": "nvme0", 00:18:32.418 "trtype": "tcp", 00:18:32.418 "traddr": "10.0.0.2", 00:18:32.418 "adrfam": "ipv4", 00:18:32.418 "trsvcid": "4420", 00:18:32.418 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:32.418 "prchk_reftag": false, 00:18:32.418 "prchk_guard": false, 00:18:32.418 "hdgst": false, 00:18:32.418 "ddgst": false, 00:18:32.418 "dhchap_key": "key2", 00:18:32.418 "allow_unrecognized_csi": false, 00:18:32.418 "method": "bdev_nvme_attach_controller", 00:18:32.418 "req_id": 1 00:18:32.418 } 00:18:32.418 Got JSON-RPC error response 00:18:32.418 response: 00:18:32.418 { 00:18:32.418 "code": -5, 00:18:32.418 "message": "Input/output error" 00:18:32.418 } 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.418 16:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.418 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.986 request: 00:18:32.986 { 00:18:32.986 "name": "nvme0", 00:18:32.986 "trtype": "tcp", 00:18:32.986 "traddr": "10.0.0.2", 00:18:32.986 "adrfam": "ipv4", 00:18:32.986 "trsvcid": "4420", 00:18:32.986 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:32.986 "prchk_reftag": false, 00:18:32.986 "prchk_guard": false, 00:18:32.986 "hdgst": false, 00:18:32.986 "ddgst": false, 00:18:32.986 "dhchap_key": "key1", 00:18:32.986 "dhchap_ctrlr_key": "ckey2", 00:18:32.986 "allow_unrecognized_csi": false, 00:18:32.986 "method": "bdev_nvme_attach_controller", 00:18:32.986 "req_id": 1 00:18:32.986 } 00:18:32.986 Got JSON-RPC error response 00:18:32.986 response: 00:18:32.986 { 00:18:32.986 "code": -5, 00:18:32.986 "message": "Input/output error" 00:18:32.986 } 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:32.986 16:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.986 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.554 request: 00:18:33.554 { 00:18:33.554 "name": "nvme0", 00:18:33.554 "trtype": "tcp", 00:18:33.554 "traddr": "10.0.0.2", 00:18:33.554 "adrfam": "ipv4", 00:18:33.554 "trsvcid": "4420", 00:18:33.554 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:33.554 "prchk_reftag": false, 00:18:33.554 "prchk_guard": false, 00:18:33.554 "hdgst": false, 00:18:33.554 "ddgst": false, 00:18:33.554 "dhchap_key": "key1", 00:18:33.554 "dhchap_ctrlr_key": "ckey1", 00:18:33.554 "allow_unrecognized_csi": false, 00:18:33.554 "method": "bdev_nvme_attach_controller", 00:18:33.554 "req_id": 1 00:18:33.555 } 00:18:33.555 Got JSON-RPC error response 00:18:33.555 response: 00:18:33.555 { 00:18:33.555 "code": -5, 00:18:33.555 "message": "Input/output error" 00:18:33.555 } 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 524201 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 524201 ']' 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 524201 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.555 16:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 524201 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 524201' 00:18:33.555 killing process with pid 524201 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 524201 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 524201 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=546194 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 546194 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 546194 ']' 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.555 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 546194 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 546194 ']' 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
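For reference, the target restart that this phase of target/auth.sh performs reduces to the sketch below. It is illustrative only: the binary path, the cvl_0_0_ns_spdk namespace, and the -i/-e/--wait-for-rpc/-L flags are copied from the log above, while the polling loop merely stands in for the waitforlisten helper from autotest_common.sh.

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Launch nvmf_tgt inside the test network namespace, paused until RPC
    # initialization, with DH-HMAC-CHAP tracing (-L nvmf_auth) enabled.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Stand-in for waitforlisten: poll the default RPC socket until the app
    # answers, then continue with keyring and subsystem configuration.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 1
    done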
00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.814 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.073 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.073 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:34.073 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:34.073 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.073 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.332 null0 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cZO 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.DSc ]] 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DSc 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BHL 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.1k4 ]] 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1k4 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.332 16:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.sOd 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.YH7 ]] 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.YH7 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.RuL 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.332 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
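The keyring and host-authorization steps traced above condense to the following sketch (illustrative, not part of the captured output; key file names, NQNs, and RPC sockets are copied from the surrounding log, and the sha512/ffdhe8192 case with key3 is used as the example):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Target side: load the DH-HMAC-CHAP secret into the keyring and authorize
    # the host NQN on the subsystem with that key.
    "$SPDK_DIR/scripts/rpc.py" keyring_file_add_key key3 /tmp/spdk.key-sha512.RuL
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Host side (bdev initiator, RPC socket /var/tmp/host.sock as in the trace):
    # attach a controller over TCP, authenticating with the same key.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key3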
00:18:34.333 16:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.268 nvme0n1 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.268 { 00:18:35.268 "cntlid": 1, 00:18:35.268 "qid": 0, 00:18:35.268 "state": "enabled", 00:18:35.268 "thread": "nvmf_tgt_poll_group_000", 00:18:35.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:35.268 "listen_address": { 00:18:35.268 "trtype": "TCP", 00:18:35.268 "adrfam": "IPv4", 00:18:35.268 "traddr": "10.0.0.2", 00:18:35.268 "trsvcid": "4420" 00:18:35.268 }, 00:18:35.268 "peer_address": { 00:18:35.268 "trtype": "TCP", 00:18:35.268 "adrfam": "IPv4", 00:18:35.268 "traddr": "10.0.0.1", 00:18:35.268 "trsvcid": "33906" 00:18:35.268 }, 00:18:35.268 "auth": { 00:18:35.268 "state": "completed", 00:18:35.268 "digest": "sha512", 00:18:35.268 "dhgroup": "ffdhe8192" 00:18:35.268 } 00:18:35.268 } 00:18:35.268 ]' 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.268 16:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.527 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:35.527 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:36.093 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:36.351 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:36.351 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:36.351 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:36.351 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:36.351 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.351 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:36.351 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.351 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.351 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.352 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.611 request: 00:18:36.611 { 00:18:36.611 "name": "nvme0", 00:18:36.611 "trtype": "tcp", 00:18:36.611 "traddr": "10.0.0.2", 00:18:36.611 "adrfam": "ipv4", 00:18:36.611 "trsvcid": "4420", 00:18:36.611 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:36.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:36.611 "prchk_reftag": false, 00:18:36.611 "prchk_guard": false, 00:18:36.611 "hdgst": false, 00:18:36.611 "ddgst": false, 00:18:36.611 "dhchap_key": "key3", 00:18:36.611 "allow_unrecognized_csi": false, 00:18:36.611 "method": "bdev_nvme_attach_controller", 00:18:36.611 "req_id": 1 00:18:36.611 } 00:18:36.611 Got JSON-RPC error response 00:18:36.611 response: 00:18:36.611 { 00:18:36.611 "code": -5, 00:18:36.611 "message": "Input/output error" 00:18:36.611 } 00:18:36.611 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:36.611 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.611 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.611 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.611 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:36.611 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:36.611 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:36.611 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.921 request: 00:18:36.921 { 00:18:36.921 "name": "nvme0", 00:18:36.921 "trtype": "tcp", 00:18:36.921 "traddr": "10.0.0.2", 00:18:36.921 "adrfam": "ipv4", 00:18:36.921 "trsvcid": "4420", 00:18:36.921 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:36.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:36.921 "prchk_reftag": false, 00:18:36.921 "prchk_guard": false, 00:18:36.921 "hdgst": false, 00:18:36.921 "ddgst": false, 00:18:36.921 "dhchap_key": "key3", 00:18:36.921 "allow_unrecognized_csi": false, 00:18:36.921 "method": "bdev_nvme_attach_controller", 00:18:36.921 "req_id": 1 00:18:36.921 } 00:18:36.921 Got JSON-RPC error response 00:18:36.921 response: 00:18:36.921 { 00:18:36.921 "code": -5, 00:18:36.921 "message": "Input/output error" 00:18:36.921 } 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.921 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.207 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.466 request: 00:18:37.466 { 00:18:37.466 "name": "nvme0", 00:18:37.466 "trtype": "tcp", 00:18:37.466 "traddr": "10.0.0.2", 00:18:37.466 "adrfam": "ipv4", 00:18:37.466 "trsvcid": "4420", 00:18:37.466 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:37.466 "prchk_reftag": false, 00:18:37.466 "prchk_guard": false, 00:18:37.466 "hdgst": false, 00:18:37.466 "ddgst": false, 00:18:37.466 "dhchap_key": "key0", 00:18:37.466 "dhchap_ctrlr_key": "key1", 00:18:37.466 "allow_unrecognized_csi": false, 00:18:37.466 "method": "bdev_nvme_attach_controller", 00:18:37.466 "req_id": 1 00:18:37.466 } 00:18:37.466 Got JSON-RPC error response 00:18:37.466 response: 00:18:37.466 { 00:18:37.466 "code": -5, 00:18:37.466 "message": "Input/output error" 00:18:37.466 } 00:18:37.466 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:37.466 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.466 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.466 16:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.724 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:37.724 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:37.724 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:37.724 nvme0n1 00:18:37.724 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:37.724 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:37.724 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.982 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.982 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.982 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.240 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:38.240 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.240 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.240 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.240 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:38.240 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:38.240 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:39.178 nvme0n1 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:39.178 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.437 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.437 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:39.437 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: --dhchap-ctrl-secret DHHC-1:03:MWViNzcxYWVjZWFkZTZkZTdhMTc4NDA0MWZkMjJiNTUzMDkzZmQ0YjEzN2U4YWQ5MTY4M2ZmMzRjNTQ4ZTMwMAVSuik=: 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:40.004 16:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:40.572 request: 00:18:40.572 { 00:18:40.572 "name": "nvme0", 00:18:40.572 "trtype": "tcp", 00:18:40.572 "traddr": "10.0.0.2", 00:18:40.572 "adrfam": "ipv4", 00:18:40.572 "trsvcid": "4420", 00:18:40.572 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:40.572 "prchk_reftag": false, 00:18:40.572 "prchk_guard": false, 00:18:40.572 "hdgst": false, 00:18:40.572 "ddgst": false, 00:18:40.572 "dhchap_key": "key1", 00:18:40.572 "allow_unrecognized_csi": false, 00:18:40.572 "method": "bdev_nvme_attach_controller", 00:18:40.572 "req_id": 1 00:18:40.572 } 00:18:40.572 Got JSON-RPC error response 00:18:40.572 response: 00:18:40.572 { 00:18:40.572 "code": -5, 00:18:40.572 "message": "Input/output error" 00:18:40.572 } 00:18:40.572 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:40.572 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.572 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.572 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.572 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.572 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:40.572 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.506 nvme0n1 00:18:41.506 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:41.506 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:41.506 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.506 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.506 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.506 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.765 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:41.765 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.765 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.765 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.765 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:41.765 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:41.765 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:42.023 nvme0n1 00:18:42.023 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:42.023 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:42.023 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.023 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.023 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.023 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.281 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:42.281 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: '' 2s 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: ]] 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MWE5YTAxMDA4NjdkYmMzNmEwMDgzZTliYjg4YzQ4ZGH48sMJ: 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:42.282 16:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: 2s 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:44.814 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: 00:18:44.815 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:44.815 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:44.815 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:44.815 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: ]] 00:18:44.815 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTgyM2I4NmRhMDcwNmVmOWQ4ZDBkNGIxNTZmOGJhNmNlNzJlZTg1ZGE2ODVlMTI53HCf5A==: 00:18:44.815 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:44.815 16:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:46.718 16:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:47.285 nvme0n1 00:18:47.285 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.285 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.285 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.285 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.285 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.285 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:47.545 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:47.545 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:47.545 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.803 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.803 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:47.803 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.803 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.803 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.803 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:47.803 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:48.062 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:48.062 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:48.062 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.320 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.321 16:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:48.887 request: 00:18:48.887 { 00:18:48.887 "name": "nvme0", 00:18:48.887 "dhchap_key": "key1", 00:18:48.887 "dhchap_ctrlr_key": "key3", 00:18:48.887 "method": "bdev_nvme_set_keys", 00:18:48.887 "req_id": 1 00:18:48.887 } 00:18:48.887 Got JSON-RPC error response 00:18:48.887 response: 00:18:48.887 { 00:18:48.887 "code": -13, 00:18:48.887 "message": "Permission denied" 00:18:48.887 } 00:18:48.887 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:48.887 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.887 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.887 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.887 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:48.887 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.887 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:48.887 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:48.887 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:49.824 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:49.824 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:49.824 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.082 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:50.082 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:50.082 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.082 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.082 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.082 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:50.082 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:50.082 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:51.018 nvme0n1 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
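For reference, the re-key sequence that target/auth.sh is walking through above reduces to three RPCs: the target updates the key pair the subsystem will accept for this host (nvmf_subsystem_set_keys), the host then re-authenticates its live bdev_nvme controller with the same pair (bdev_nvme_set_keys), and the controller list is checked to confirm the session survived. The sketch below condenses that flow using the NQNs, socket paths and key names from the trace; it is a minimal sketch only, with the retry and xtrace plumbing of the rpc_cmd and hostrpc helpers omitted.

```bash
#!/usr/bin/env bash
# Minimal sketch of the DHCHAP re-key flow traced above (helper plumbing omitted).
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

# Target side (default RPC socket): allow key2/key3 for this host on the subsystem.
$rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side (/var/tmp/host.sock): re-authenticate the existing controller with the new pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# The controller should still be listed after the re-key.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
```

Asking for a key pair the subsystem does not currently allow is the negative case being exercised here: bdev_nvme_set_keys then fails with the JSON-RPC code -13 "Permission denied" response shown in the trace.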
00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:51.018 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:51.278 request: 00:18:51.278 { 00:18:51.278 "name": "nvme0", 00:18:51.278 "dhchap_key": "key2", 00:18:51.278 "dhchap_ctrlr_key": "key0", 00:18:51.278 "method": "bdev_nvme_set_keys", 00:18:51.278 "req_id": 1 00:18:51.278 } 00:18:51.278 Got JSON-RPC error response 00:18:51.278 response: 00:18:51.278 { 00:18:51.278 "code": -13, 00:18:51.278 "message": "Permission denied" 00:18:51.278 } 00:18:51.278 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:51.278 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:51.278 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:51.278 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:51.278 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:51.278 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:51.278 16:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.536 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:51.536 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 524221 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 524221 ']' 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 524221 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:52.914 16:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 524221 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 524221' 00:18:52.914 killing process with pid 524221 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 524221 00:18:52.914 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 524221 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:53.173 rmmod nvme_tcp 00:18:53.173 rmmod nvme_fabrics 00:18:53.173 rmmod nvme_keyring 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 546194 ']' 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 546194 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 546194 ']' 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 546194 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 546194 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 546194' 00:18:53.173 killing process with pid 546194 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 546194 00:18:53.173 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@974 -- # wait 546194 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.432 16:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.970 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:55.970 16:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cZO /tmp/spdk.key-sha256.BHL /tmp/spdk.key-sha384.sOd /tmp/spdk.key-sha512.RuL /tmp/spdk.key-sha512.DSc /tmp/spdk.key-sha384.1k4 /tmp/spdk.key-sha256.YH7 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:55.970 00:18:55.970 real 2m31.618s 00:18:55.970 user 5m49.464s 00:18:55.970 sys 0m24.241s 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.970 ************************************ 00:18:55.970 END TEST nvmf_auth_target 00:18:55.970 ************************************ 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:55.970 ************************************ 00:18:55.970 START TEST nvmf_bdevio_no_huge 00:18:55.970 ************************************ 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:55.970 * Looking for test storage... 
00:18:55.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:55.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:55.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.971 --rc genhtml_branch_coverage=1 00:18:55.971 --rc genhtml_function_coverage=1 00:18:55.971 --rc genhtml_legend=1 00:18:55.971 --rc geninfo_all_blocks=1 00:18:55.971 --rc geninfo_unexecuted_blocks=1 00:18:55.971 00:18:55.971 ' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:55.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.971 --rc genhtml_branch_coverage=1 00:18:55.971 --rc genhtml_function_coverage=1 00:18:55.971 --rc genhtml_legend=1 00:18:55.971 --rc geninfo_all_blocks=1 00:18:55.971 --rc geninfo_unexecuted_blocks=1 00:18:55.971 00:18:55.971 ' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:55.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.971 --rc genhtml_branch_coverage=1 00:18:55.971 --rc genhtml_function_coverage=1 00:18:55.971 --rc genhtml_legend=1 00:18:55.971 --rc geninfo_all_blocks=1 00:18:55.971 --rc geninfo_unexecuted_blocks=1 00:18:55.971 00:18:55.971 ' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:55.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:55.971 --rc genhtml_branch_coverage=1 00:18:55.971 --rc genhtml_function_coverage=1 00:18:55.971 --rc genhtml_legend=1 00:18:55.971 --rc geninfo_all_blocks=1 00:18:55.971 --rc geninfo_unexecuted_blocks=1 00:18:55.971 00:18:55.971 ' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:55.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:55.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:02.539 
16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:02.539 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:02.540 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:02.540 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:02.540 Found net devices under 0000:86:00.0: cvl_0_0 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:02.540 Found net devices under 0000:86:00.1: cvl_0_1 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.540 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:02.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.461 ms 00:19:02.540 00:19:02.540 --- 10.0.0.2 ping statistics --- 00:19:02.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.540 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:02.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:19:02.540 00:19:02.540 --- 10.0.0.1 ping statistics --- 00:19:02.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.540 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=552992 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 552992 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 552992 ']' 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.540 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.540 [2024-10-14 16:44:06.267995] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:19:02.540 [2024-10-14 16:44:06.268044] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:02.540 [2024-10-14 16:44:06.345678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:02.540 [2024-10-14 16:44:06.393098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.540 [2024-10-14 16:44:06.393132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.540 [2024-10-14 16:44:06.393139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.540 [2024-10-14 16:44:06.393145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.541 [2024-10-14 16:44:06.393151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
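As context for the EAL output above: nvmftestinit moved the first e810 port (cvl_0_0) into a private network namespace to act as the target, left the second port (cvl_0_1) in the root namespace as the initiator, and nvmfappstart then launched nvmf_tgt inside that namespace without hugepages. The sketch below is a rough condensation of those steps using the interface names, addresses and flags from the trace; the address flushes, the reverse ping, modprobe nvme-tcp, waitforlisten and all error handling are omitted, and backgrounding the target with & stands in for the helper that normally supervises it.

```bash
#!/usr/bin/env bash
# Rough sketch of the nvmftestinit/nvmfappstart steps traced above (assumptions noted inline).
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk            # target lives in its own network namespace
TGT_IF=cvl_0_0 TGT_IP=10.0.0.2
INI_IF=cvl_0_1 INI_IP=10.0.0.1

# Move the target-side port into the namespace and address both ends.
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic through on the initiator side and sanity-check the path.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TGT_IP"

# Start the target inside the namespace with plain memory instead of hugepages
# (& used here for illustration; the test harness supervises the process itself).
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
```

The -m 0x78 core mask matches the four reactors reported above (cores 3-6), and -s 1024 gives the --no-huge target 1024 MiB of ordinary memory in place of hugepages.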
00:19:02.541 [2024-10-14 16:44:06.394242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:02.541 [2024-10-14 16:44:06.394352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:02.541 [2024-10-14 16:44:06.394459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.541 [2024-10-14 16:44:06.394459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.541 [2024-10-14 16:44:06.542822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.541 Malloc0 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.541 [2024-10-14 16:44:06.587103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:19:02.541 { 00:19:02.541 "params": { 00:19:02.541 "name": "Nvme$subsystem", 00:19:02.541 "trtype": "$TEST_TRANSPORT", 00:19:02.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:02.541 "adrfam": "ipv4", 00:19:02.541 "trsvcid": "$NVMF_PORT", 00:19:02.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:02.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:02.541 "hdgst": ${hdgst:-false}, 00:19:02.541 "ddgst": ${ddgst:-false} 00:19:02.541 }, 00:19:02.541 "method": "bdev_nvme_attach_controller" 00:19:02.541 } 00:19:02.541 EOF 00:19:02.541 )") 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:19:02.541 16:44:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:19:02.541 "params": { 00:19:02.541 "name": "Nvme1", 00:19:02.541 "trtype": "tcp", 00:19:02.541 "traddr": "10.0.0.2", 00:19:02.541 "adrfam": "ipv4", 00:19:02.541 "trsvcid": "4420", 00:19:02.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.541 "hdgst": false, 00:19:02.541 "ddgst": false 00:19:02.541 }, 00:19:02.541 "method": "bdev_nvme_attach_controller" 00:19:02.541 }' 00:19:02.541 [2024-10-14 16:44:06.637486] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
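The target that bdevio is about to attach to was provisioned by the four rpc_cmd calls above: create the TCP transport, create a 64 MiB malloc bdev, expose it through subsystem nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420. Condensed into plain rpc.py invocations (rpc_cmd's retry wrapper is omitted; flags are reproduced exactly as passed by the test):

```bash
#!/usr/bin/env bash
# The provisioning RPCs issued above, condensed (retry wrapper omitted).
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport with the test's options
rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

bdevio itself is then pointed at this listener through the JSON config produced by gen_nvmf_target_json and handed over on /dev/fd/62, as shown in the printf output above.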
00:19:02.541 [2024-10-14 16:44:06.637531] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid553104 ] 00:19:02.541 [2024-10-14 16:44:06.708921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.541 [2024-10-14 16:44:06.756929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.541 [2024-10-14 16:44:06.757034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.541 [2024-10-14 16:44:06.757034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.541 I/O targets: 00:19:02.541 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:02.541 00:19:02.541 00:19:02.541 CUnit - A unit testing framework for C - Version 2.1-3 00:19:02.541 http://cunit.sourceforge.net/ 00:19:02.541 00:19:02.541 00:19:02.541 Suite: bdevio tests on: Nvme1n1 00:19:02.541 Test: blockdev write read block ...passed 00:19:02.800 Test: blockdev write zeroes read block ...passed 00:19:02.800 Test: blockdev write zeroes read no split ...passed 00:19:02.800 Test: blockdev write zeroes read split ...passed 00:19:02.800 Test: blockdev write zeroes read split partial ...passed 00:19:02.800 Test: blockdev reset ...[2024-10-14 16:44:07.209248] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.800 [2024-10-14 16:44:07.209312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1051a20 (9): Bad file descriptor 00:19:02.800 [2024-10-14 16:44:07.265431] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:02.800 passed 00:19:02.800 Test: blockdev write read 8 blocks ...passed 00:19:02.800 Test: blockdev write read size > 128k ...passed 00:19:02.800 Test: blockdev write read invalid size ...passed 00:19:02.800 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:02.800 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:02.800 Test: blockdev write read max offset ...passed 00:19:02.800 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:03.059 Test: blockdev writev readv 8 blocks ...passed 00:19:03.059 Test: blockdev writev readv 30 x 1block ...passed 00:19:03.059 Test: blockdev writev readv block ...passed 00:19:03.059 Test: blockdev writev readv size > 128k ...passed 00:19:03.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:03.059 Test: blockdev comparev and writev ...[2024-10-14 16:44:07.516532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.059 [2024-10-14 16:44:07.516561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:03.059 [2024-10-14 16:44:07.516574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.060 [2024-10-14 16:44:07.516582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:03.060 [2024-10-14 16:44:07.516835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.060 [2024-10-14 16:44:07.516847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:03.060 [2024-10-14 16:44:07.516858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.060 [2024-10-14 16:44:07.516865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:03.060 [2024-10-14 16:44:07.517084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.060 [2024-10-14 16:44:07.517094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:03.060 [2024-10-14 16:44:07.517105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.060 [2024-10-14 16:44:07.517112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:03.060 [2024-10-14 16:44:07.517339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.060 [2024-10-14 16:44:07.517349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:03.060 [2024-10-14 16:44:07.517360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:03.060 [2024-10-14 16:44:07.517368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:03.060 passed 00:19:03.060 Test: blockdev nvme passthru rw ...passed 00:19:03.060 Test: blockdev nvme passthru vendor specific ...[2024-10-14 16:44:07.599041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.060 [2024-10-14 16:44:07.599059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:03.060 [2024-10-14 16:44:07.599168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.060 [2024-10-14 16:44:07.599178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:03.060 [2024-10-14 16:44:07.599274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.060 [2024-10-14 16:44:07.599285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:03.060 [2024-10-14 16:44:07.599390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:03.060 [2024-10-14 16:44:07.599400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:03.060 passed 00:19:03.060 Test: blockdev nvme admin passthru ...passed 00:19:03.060 Test: blockdev copy ...passed 00:19:03.060 00:19:03.060 Run Summary: Type Total Ran Passed Failed Inactive 00:19:03.060 suites 1 1 n/a 0 0 00:19:03.060 tests 23 23 23 0 0 00:19:03.060 asserts 152 152 152 0 n/a 00:19:03.060 00:19:03.060 Elapsed time = 1.138 seconds 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:03.319 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:03.319 rmmod nvme_tcp 00:19:03.580 rmmod nvme_fabrics 00:19:03.580 rmmod nvme_keyring 00:19:03.580 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:03.580 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 552992 ']' 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 552992 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 552992 ']' 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 552992 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 552992 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 552992' 00:19:03.580 killing process with pid 552992 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 552992 00:19:03.580 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 552992 00:19:03.838 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.839 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:06.372 00:19:06.372 real 0m10.345s 00:19:06.372 user 0m11.829s 00:19:06.372 sys 0m5.278s 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:06.372 ************************************ 00:19:06.372 END TEST nvmf_bdevio_no_huge 00:19:06.372 ************************************ 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:06.372 ************************************ 00:19:06.372 START TEST nvmf_tls 00:19:06.372 ************************************ 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:06.372 * Looking for test storage... 00:19:06.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:06.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.372 --rc genhtml_branch_coverage=1 00:19:06.372 --rc genhtml_function_coverage=1 00:19:06.372 --rc genhtml_legend=1 00:19:06.372 --rc geninfo_all_blocks=1 00:19:06.372 --rc geninfo_unexecuted_blocks=1 00:19:06.372 00:19:06.372 ' 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:06.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.372 --rc genhtml_branch_coverage=1 00:19:06.372 --rc genhtml_function_coverage=1 00:19:06.372 --rc genhtml_legend=1 00:19:06.372 --rc geninfo_all_blocks=1 00:19:06.372 --rc geninfo_unexecuted_blocks=1 00:19:06.372 00:19:06.372 ' 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:06.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.372 --rc genhtml_branch_coverage=1 00:19:06.372 --rc genhtml_function_coverage=1 00:19:06.372 --rc genhtml_legend=1 00:19:06.372 --rc geninfo_all_blocks=1 00:19:06.372 --rc geninfo_unexecuted_blocks=1 00:19:06.372 00:19:06.372 ' 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:06.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.372 --rc genhtml_branch_coverage=1 00:19:06.372 --rc genhtml_function_coverage=1 00:19:06.372 --rc genhtml_legend=1 00:19:06.372 --rc geninfo_all_blocks=1 00:19:06.372 --rc geninfo_unexecuted_blocks=1 00:19:06.372 00:19:06.372 ' 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.372 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:06.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:06.373 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.940 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:12.940 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:12.940 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:12.941 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:12.941 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:12.941 Found net devices under 0000:86:00.0: cvl_0_0 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:12.941 Found net devices under 0000:86:00.1: cvl_0_1 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:12.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:12.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:19:12.941 00:19:12.941 --- 10.0.0.2 ping statistics --- 00:19:12.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.941 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:12.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:19:12.941 00:19:12.941 --- 10.0.0.1 ping statistics --- 00:19:12.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.941 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:12.941 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=556880 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 556880 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 556880 ']' 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.942 [2024-10-14 16:44:16.747326] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:19:12.942 [2024-10-14 16:44:16.747376] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.942 [2024-10-14 16:44:16.822843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.942 [2024-10-14 16:44:16.864012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.942 [2024-10-14 16:44:16.864044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.942 [2024-10-14 16:44:16.864051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.942 [2024-10-14 16:44:16.864057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.942 [2024-10-14 16:44:16.864062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:12.942 [2024-10-14 16:44:16.864609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:12.942 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:12.942 true 00:19:12.942 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:12.942 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:12.942 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:12.942 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:12.942 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:12.942 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:12.942 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:13.200 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:13.200 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:13.200 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:13.459 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:13.459 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:13.459 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:13.459 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:13.459 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:13.459 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:13.717 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:13.717 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:13.717 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:13.975 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:13.975 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:14.233 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:14.233 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:14.233 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:14.233 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:14.233 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.FsxdiK5evm 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.fCvrWKHn6S 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.FsxdiK5evm 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.fCvrWKHn6S 00:19:14.491 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:14.749 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:15.007 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.FsxdiK5evm 00:19:15.007 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.FsxdiK5evm 00:19:15.007 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:15.266 [2024-10-14 16:44:19.700945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.266 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:15.266 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:15.524 [2024-10-14 16:44:20.049858] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:15.524 [2024-10-14 16:44:20.050064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.524 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:15.782 malloc0 00:19:15.782 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:16.041 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.FsxdiK5evm 00:19:16.041 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:16.299 16:44:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.FsxdiK5evm 00:19:28.500 Initializing NVMe Controllers 00:19:28.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:28.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:28.500 Initialization complete. Launching workers. 00:19:28.500 ======================================================== 00:19:28.500 Latency(us) 00:19:28.500 Device Information : IOPS MiB/s Average min max 00:19:28.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16821.38 65.71 3804.77 805.30 5022.41 00:19:28.500 ======================================================== 00:19:28.500 Total : 16821.38 65.71 3804.77 805.30 5022.41 00:19:28.500 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FsxdiK5evm 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FsxdiK5evm 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=559230 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 559230 /var/tmp/bdevperf.sock 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 559230 ']' 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:28.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.500 16:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.500 [2024-10-14 16:44:30.983908] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:19:28.500 [2024-10-14 16:44:30.983956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid559230 ] 00:19:28.500 [2024-10-14 16:44:31.049238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.500 [2024-10-14 16:44:31.088716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.500 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.500 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:28.500 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FsxdiK5evm 00:19:28.500 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.500 [2024-10-14 16:44:31.530833] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.500 TLSTESTn1 00:19:28.500 16:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:28.500 Running I/O for 10 seconds... 
00:19:29.066 5474.00 IOPS, 21.38 MiB/s [2024-10-14T14:44:35.076Z] 5592.00 IOPS, 21.84 MiB/s [2024-10-14T14:44:36.010Z] 5573.00 IOPS, 21.77 MiB/s [2024-10-14T14:44:36.945Z] 5597.50 IOPS, 21.87 MiB/s [2024-10-14T14:44:37.880Z] 5613.80 IOPS, 21.93 MiB/s [2024-10-14T14:44:38.820Z] 5638.50 IOPS, 22.03 MiB/s [2024-10-14T14:44:39.757Z] 5643.86 IOPS, 22.05 MiB/s [2024-10-14T14:44:41.138Z] 5622.62 IOPS, 21.96 MiB/s [2024-10-14T14:44:41.829Z] 5614.00 IOPS, 21.93 MiB/s [2024-10-14T14:44:41.829Z] 5619.80 IOPS, 21.95 MiB/s 00:19:37.195 Latency(us) 00:19:37.195 [2024-10-14T14:44:41.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.195 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:37.195 Verification LBA range: start 0x0 length 0x2000 00:19:37.195 TLSTESTn1 : 10.01 5624.53 21.97 0.00 0.00 22724.08 5648.58 24591.60 00:19:37.195 [2024-10-14T14:44:41.829Z] =================================================================================================================== 00:19:37.195 [2024-10-14T14:44:41.829Z] Total : 5624.53 21.97 0.00 0.00 22724.08 5648.58 24591.60 00:19:37.195 { 00:19:37.195 "results": [ 00:19:37.195 { 00:19:37.195 "job": "TLSTESTn1", 00:19:37.195 "core_mask": "0x4", 00:19:37.195 "workload": "verify", 00:19:37.195 "status": "finished", 00:19:37.195 "verify_range": { 00:19:37.195 "start": 0, 00:19:37.195 "length": 8192 00:19:37.195 }, 00:19:37.195 "queue_depth": 128, 00:19:37.195 "io_size": 4096, 00:19:37.195 "runtime": 10.013807, 00:19:37.195 "iops": 5624.534205622297, 00:19:37.195 "mibps": 21.970836740712098, 00:19:37.195 "io_failed": 0, 00:19:37.195 "io_timeout": 0, 00:19:37.195 "avg_latency_us": 22724.07606642977, 00:19:37.195 "min_latency_us": 5648.579047619048, 00:19:37.195 "max_latency_us": 24591.60380952381 00:19:37.195 } 00:19:37.195 ], 00:19:37.195 "core_count": 1 00:19:37.195 } 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 559230 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 559230 ']' 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 559230 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 559230 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 559230' 00:19:37.195 killing process with pid 559230 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 559230 00:19:37.195 Received shutdown signal, test time was about 10.000000 seconds 00:19:37.195 00:19:37.195 Latency(us) 00:19:37.195 [2024-10-14T14:44:41.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.195 [2024-10-14T14:44:41.829Z] 
=================================================================================================================== 00:19:37.195 [2024-10-14T14:44:41.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.195 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 559230 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCvrWKHn6S 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCvrWKHn6S 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCvrWKHn6S 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fCvrWKHn6S 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=561066 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 561066 /var/tmp/bdevperf.sock 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 561066 ']' 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
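From target/tls.sh@147 onward the script moves to negative tests: each run_bdevperf is wrapped in NOT, the autotest_common.sh helper whose es=0 / (( !es == 0 )) bookkeeping is traced around it and which succeeds only when the wrapped command fails. This first case hands bdevperf /tmp/tmp.fCvrWKHn6S, a key the target was never given, so the TLS handshake cannot complete, the connection never comes up, and the host reports errno 107 (Transport endpoint is not connected) followed by the -5 Input/output error dumped further down. In shell-comment form:

    # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCvrWKHn6S
    #   target (set up earlier):   keyring_file_add_key key0 /tmp/tmp.FsxdiK5evm   <- the only PSK bound to host1
    #   bdevperf (this run):       keyring_file_add_key key0 /tmp/tmp.fCvrWKHn6S   <- freshly generated, unknown to the target
    # NOT therefore counts the expected attach failure as a pass.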
00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.460 16:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.460 [2024-10-14 16:44:42.011371] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:19:37.460 [2024-10-14 16:44:42.011420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561066 ] 00:19:37.460 [2024-10-14 16:44:42.077165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.719 [2024-10-14 16:44:42.114307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.719 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.719 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:37.719 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fCvrWKHn6S 00:19:37.979 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:37.979 [2024-10-14 16:44:42.567866] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.979 [2024-10-14 16:44:42.575834] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:37.979 [2024-10-14 16:44:42.576269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afe230 (107): Transport endpoint is not connected 00:19:37.979 [2024-10-14 16:44:42.577262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afe230 (9): Bad file descriptor 00:19:37.979 [2024-10-14 16:44:42.578263] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:37.979 [2024-10-14 16:44:42.578272] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:37.979 [2024-10-14 16:44:42.578279] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:37.979 [2024-10-14 16:44:42.578286] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:37.979 request: 00:19:37.979 { 00:19:37.979 "name": "TLSTEST", 00:19:37.979 "trtype": "tcp", 00:19:37.979 "traddr": "10.0.0.2", 00:19:37.979 "adrfam": "ipv4", 00:19:37.979 "trsvcid": "4420", 00:19:37.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.979 "prchk_reftag": false, 00:19:37.979 "prchk_guard": false, 00:19:37.979 "hdgst": false, 00:19:37.979 "ddgst": false, 00:19:37.979 "psk": "key0", 00:19:37.979 "allow_unrecognized_csi": false, 00:19:37.979 "method": "bdev_nvme_attach_controller", 00:19:37.979 "req_id": 1 00:19:37.979 } 00:19:37.979 Got JSON-RPC error response 00:19:37.979 response: 00:19:37.979 { 00:19:37.979 "code": -5, 00:19:37.979 "message": "Input/output error" 00:19:37.979 } 00:19:37.979 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 561066 00:19:37.979 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 561066 ']' 00:19:37.979 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 561066 00:19:37.979 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:37.979 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.979 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 561066 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 561066' 00:19:38.238 killing process with pid 561066 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 561066 00:19:38.238 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.238 00:19:38.238 Latency(us) 00:19:38.238 [2024-10-14T14:44:42.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.238 [2024-10-14T14:44:42.872Z] =================================================================================================================== 00:19:38.238 [2024-10-14T14:44:42.872Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 561066 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FsxdiK5evm 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.FsxdiK5evm 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FsxdiK5evm 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FsxdiK5evm 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=561088 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 561088 /var/tmp/bdevperf.sock 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 561088 ']' 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.238 16:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.238 [2024-10-14 16:44:42.856217] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:19:38.238 [2024-10-14 16:44:42.856266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561088 ] 00:19:38.497 [2024-10-14 16:44:42.923835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.497 [2024-10-14 16:44:42.963565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.497 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.497 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:38.497 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FsxdiK5evm 00:19:38.755 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:39.014 [2024-10-14 16:44:43.446576] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.014 [2024-10-14 16:44:43.457969] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:39.014 [2024-10-14 16:44:43.457992] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:39.014 [2024-10-14 16:44:43.458014] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:39.014 [2024-10-14 16:44:43.458955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1366230 (107): Transport endpoint is not connected 00:19:39.014 [2024-10-14 16:44:43.459949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1366230 (9): Bad file descriptor 00:19:39.014 [2024-10-14 16:44:43.460950] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:39.014 [2024-10-14 16:44:43.460959] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:39.014 [2024-10-14 16:44:43.460966] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:39.014 [2024-10-14 16:44:43.460974] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
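The burst of errors above is the host-NQN variant of the same failure. The identity string in "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" shows the target resolves the PSK per (hostnqn, subnqn) pair during the handshake; only host1 was ever added with --psk key0, so the lookup for host2 finds nothing, the handshake is refused, and the attach ends in the same -5 Input/output error dumped next. For this attach to succeed the target would additionally need a host entry along these lines (not part of this run, shown only for contrast):

    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0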
00:19:39.014 request: 00:19:39.014 { 00:19:39.014 "name": "TLSTEST", 00:19:39.014 "trtype": "tcp", 00:19:39.014 "traddr": "10.0.0.2", 00:19:39.014 "adrfam": "ipv4", 00:19:39.014 "trsvcid": "4420", 00:19:39.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.014 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:39.014 "prchk_reftag": false, 00:19:39.014 "prchk_guard": false, 00:19:39.014 "hdgst": false, 00:19:39.014 "ddgst": false, 00:19:39.014 "psk": "key0", 00:19:39.014 "allow_unrecognized_csi": false, 00:19:39.014 "method": "bdev_nvme_attach_controller", 00:19:39.014 "req_id": 1 00:19:39.014 } 00:19:39.014 Got JSON-RPC error response 00:19:39.014 response: 00:19:39.014 { 00:19:39.014 "code": -5, 00:19:39.014 "message": "Input/output error" 00:19:39.014 } 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 561088 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 561088 ']' 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 561088 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 561088 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 561088' 00:19:39.014 killing process with pid 561088 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 561088 00:19:39.014 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.014 00:19:39.014 Latency(us) 00:19:39.014 [2024-10-14T14:44:43.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.014 [2024-10-14T14:44:43.648Z] =================================================================================================================== 00:19:39.014 [2024-10-14T14:44:43.648Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:39.014 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 561088 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FsxdiK5evm 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.FsxdiK5evm 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FsxdiK5evm 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.FsxdiK5evm 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=561320 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 561320 /var/tmp/bdevperf.sock 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 561320 ']' 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.273 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.273 [2024-10-14 16:44:43.721629] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:19:39.273 [2024-10-14 16:44:43.721677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561320 ] 00:19:39.273 [2024-10-14 16:44:43.791478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.273 [2024-10-14 16:44:43.829372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.532 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.532 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:39.532 16:44:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FsxdiK5evm 00:19:39.532 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:39.792 [2024-10-14 16:44:44.278959] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.792 [2024-10-14 16:44:44.290096] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:39.792 [2024-10-14 16:44:44.290117] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:39.792 [2024-10-14 16:44:44.290138] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:39.792 [2024-10-14 16:44:44.290323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d7230 (107): Transport endpoint is not connected 00:19:39.792 [2024-10-14 16:44:44.291316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d7230 (9): Bad file descriptor 00:19:39.792 [2024-10-14 16:44:44.292318] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:39.792 [2024-10-14 16:44:44.292328] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:39.792 [2024-10-14 16:44:44.292335] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:39.792 [2024-10-14 16:44:44.292342] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:39.792 request: 00:19:39.792 { 00:19:39.792 "name": "TLSTEST", 00:19:39.792 "trtype": "tcp", 00:19:39.792 "traddr": "10.0.0.2", 00:19:39.792 "adrfam": "ipv4", 00:19:39.792 "trsvcid": "4420", 00:19:39.792 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:39.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.792 "prchk_reftag": false, 00:19:39.792 "prchk_guard": false, 00:19:39.792 "hdgst": false, 00:19:39.792 "ddgst": false, 00:19:39.792 "psk": "key0", 00:19:39.792 "allow_unrecognized_csi": false, 00:19:39.792 "method": "bdev_nvme_attach_controller", 00:19:39.792 "req_id": 1 00:19:39.792 } 00:19:39.792 Got JSON-RPC error response 00:19:39.792 response: 00:19:39.792 { 00:19:39.792 "code": -5, 00:19:39.792 "message": "Input/output error" 00:19:39.792 } 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 561320 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 561320 ']' 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 561320 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 561320 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 561320' 00:19:39.792 killing process with pid 561320 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 561320 00:19:39.792 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.792 00:19:39.792 Latency(us) 00:19:39.792 [2024-10-14T14:44:44.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.792 [2024-10-14T14:44:44.426Z] =================================================================================================================== 00:19:39.792 [2024-10-14T14:44:44.426Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:39.792 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 561320 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:40.050 16:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:40.050 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=561436 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 561436 /var/tmp/bdevperf.sock 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 561436 ']' 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.051 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.051 [2024-10-14 16:44:44.570850] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:19:40.051 [2024-10-14 16:44:44.570901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561436 ] 00:19:40.051 [2024-10-14 16:44:44.640361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.051 [2024-10-14 16:44:44.676738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.310 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.310 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:40.310 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:40.310 [2024-10-14 16:44:44.930353] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:40.310 [2024-10-14 16:44:44.930386] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:40.310 request: 00:19:40.310 { 00:19:40.310 "name": "key0", 00:19:40.310 "path": "", 00:19:40.310 "method": "keyring_file_add_key", 00:19:40.310 "req_id": 1 00:19:40.310 } 00:19:40.310 Got JSON-RPC error response 00:19:40.310 response: 00:19:40.310 { 00:19:40.310 "code": -1, 00:19:40.310 "message": "Operation not permitted" 00:19:40.310 } 00:19:40.568 16:44:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:40.568 [2024-10-14 16:44:45.122941] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.568 [2024-10-14 16:44:45.122970] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:40.568 request: 00:19:40.568 { 00:19:40.568 "name": "TLSTEST", 00:19:40.568 "trtype": "tcp", 00:19:40.568 "traddr": "10.0.0.2", 00:19:40.568 "adrfam": "ipv4", 00:19:40.569 "trsvcid": "4420", 00:19:40.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.569 "prchk_reftag": false, 00:19:40.569 "prchk_guard": false, 00:19:40.569 "hdgst": false, 00:19:40.569 "ddgst": false, 00:19:40.569 "psk": "key0", 00:19:40.569 "allow_unrecognized_csi": false, 00:19:40.569 "method": "bdev_nvme_attach_controller", 00:19:40.569 "req_id": 1 00:19:40.569 } 00:19:40.569 Got JSON-RPC error response 00:19:40.569 response: 00:19:40.569 { 00:19:40.569 "code": -126, 00:19:40.569 "message": "Required key not available" 00:19:40.569 } 00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 561436 00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 561436 ']' 00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 561436 00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 561436 
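target/tls.sh@156 passes an empty string instead of a key file, so this case fails one step earlier than the previous ones: keyring_file_add_key itself rejects the path ("Non-absolute paths are not allowed", JSON-RPC -1 Operation not permitted), and the attach that follows fails with -126 Required key not available because key0 never made it into the keyring. The contrast with the accepted form used by the earlier runs:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''                     # rejected: keyring_file only takes absolute paths
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FsxdiK5evm    # accepted form from the earlier runs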
00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 561436' 00:19:40.569 killing process with pid 561436 00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 561436 00:19:40.569 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.569 00:19:40.569 Latency(us) 00:19:40.569 [2024-10-14T14:44:45.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.569 [2024-10-14T14:44:45.203Z] =================================================================================================================== 00:19:40.569 [2024-10-14T14:44:45.203Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:40.569 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 561436 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 556880 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 556880 ']' 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 556880 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 556880 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 556880' 00:19:40.828 killing process with pid 556880 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 556880 00:19:40.828 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 556880 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.5NrG24K1Hc 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.5NrG24K1Hc 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=561589 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 561589 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 561589 ']' 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.087 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.087 [2024-10-14 16:44:45.646349] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:19:41.087 [2024-10-14 16:44:45.646396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.087 [2024-10-14 16:44:45.719271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.346 [2024-10-14 16:44:45.759336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.346 [2024-10-14 16:44:45.759372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:41.346 [2024-10-14 16:44:45.759379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.346 [2024-10-14 16:44:45.759385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.346 [2024-10-14 16:44:45.759390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.346 [2024-10-14 16:44:45.759961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.346 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.346 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:41.346 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:41.346 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:41.346 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.346 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.346 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.5NrG24K1Hc 00:19:41.346 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5NrG24K1Hc 00:19:41.346 16:44:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:41.605 [2024-10-14 16:44:46.062401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.605 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:41.863 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:41.863 [2024-10-14 16:44:46.423331] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.863 [2024-10-14 16:44:46.423542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.863 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:42.122 malloc0 00:19:42.122 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:42.381 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc 00:19:42.381 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5NrG24K1Hc 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5NrG24K1Hc 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=561839 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 561839 /var/tmp/bdevperf.sock 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 561839 ']' 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.641 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.641 [2024-10-14 16:44:47.159316] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
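For the long-key variant (target/tls.sh@160 onward) the previous nvmf target was killed and a fresh one reprovisioned just above, this time with the interchange-format key produced by format_interchange_psk (prefix NVMeTLSkey-1, digest selector 2) written to a new 0600 file. Condensed from the calls traced above (the redirection into the key file is implied by the trace rather than shown; Jenkins workspace prefix dropped):

    echo -n 'NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:' > /tmp/tmp.5NrG24K1Hc
    chmod 0600 /tmp/tmp.5NrG24K1Hc                                                            # keyring_file insists on owner-only permissions (see the 0666 case further down)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k is the flag this script uses to request the TLS listener
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0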
00:19:42.641 [2024-10-14 16:44:47.159363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid561839 ] 00:19:42.641 [2024-10-14 16:44:47.227356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.641 [2024-10-14 16:44:47.268385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.900 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.900 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:42.900 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc 00:19:43.158 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.158 [2024-10-14 16:44:47.726806] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.416 TLSTESTn1 00:19:43.416 16:44:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:43.416 Running I/O for 10 seconds... 00:19:45.289 5239.00 IOPS, 20.46 MiB/s [2024-10-14T14:44:51.298Z] 5454.00 IOPS, 21.30 MiB/s [2024-10-14T14:44:52.233Z] 5473.00 IOPS, 21.38 MiB/s [2024-10-14T14:44:53.169Z] 5395.50 IOPS, 21.08 MiB/s [2024-10-14T14:44:54.104Z] 5332.60 IOPS, 20.83 MiB/s [2024-10-14T14:44:55.040Z] 5303.33 IOPS, 20.72 MiB/s [2024-10-14T14:44:55.975Z] 5282.29 IOPS, 20.63 MiB/s [2024-10-14T14:44:57.352Z] 5297.12 IOPS, 20.69 MiB/s [2024-10-14T14:44:57.919Z] 5282.89 IOPS, 20.64 MiB/s [2024-10-14T14:44:58.188Z] 5288.90 IOPS, 20.66 MiB/s 00:19:53.554 Latency(us) 00:19:53.554 [2024-10-14T14:44:58.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.554 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:53.554 Verification LBA range: start 0x0 length 0x2000 00:19:53.554 TLSTESTn1 : 10.02 5292.72 20.67 0.00 0.00 24149.08 5867.03 30458.64 00:19:53.554 [2024-10-14T14:44:58.188Z] =================================================================================================================== 00:19:53.554 [2024-10-14T14:44:58.188Z] Total : 5292.72 20.67 0.00 0.00 24149.08 5867.03 30458.64 00:19:53.554 { 00:19:53.554 "results": [ 00:19:53.554 { 00:19:53.554 "job": "TLSTESTn1", 00:19:53.554 "core_mask": "0x4", 00:19:53.554 "workload": "verify", 00:19:53.554 "status": "finished", 00:19:53.554 "verify_range": { 00:19:53.554 "start": 0, 00:19:53.554 "length": 8192 00:19:53.554 }, 00:19:53.554 "queue_depth": 128, 00:19:53.554 "io_size": 4096, 00:19:53.554 "runtime": 10.016775, 00:19:53.554 "iops": 5292.7214597512675, 00:19:53.554 "mibps": 20.67469320215339, 00:19:53.554 "io_failed": 0, 00:19:53.554 "io_timeout": 0, 00:19:53.554 "avg_latency_us": 24149.07675756465, 00:19:53.554 "min_latency_us": 5867.032380952381, 00:19:53.555 "max_latency_us": 30458.63619047619 00:19:53.555 } 00:19:53.555 ], 00:19:53.555 
"core_count": 1 00:19:53.555 } 00:19:53.555 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:53.555 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 561839 00:19:53.555 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 561839 ']' 00:19:53.555 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 561839 00:19:53.555 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:53.555 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:53.555 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 561839 00:19:53.555 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:53.555 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:53.555 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 561839' 00:19:53.555 killing process with pid 561839 00:19:53.560 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 561839 00:19:53.560 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.560 00:19:53.560 Latency(us) 00:19:53.560 [2024-10-14T14:44:58.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.560 [2024-10-14T14:44:58.194Z] =================================================================================================================== 00:19:53.560 [2024-10-14T14:44:58.194Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.560 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 561839 00:19:53.560 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.5NrG24K1Hc 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5NrG24K1Hc 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5NrG24K1Hc 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5NrG24K1Hc 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:53.561 
16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5NrG24K1Hc 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=563674 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 563674 /var/tmp/bdevperf.sock 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 563674 ']' 00:19:53.561 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.562 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.562 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.562 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.562 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.827 [2024-10-14 16:44:58.222455] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:19:53.827 [2024-10-14 16:44:58.222504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid563674 ] 00:19:53.827 [2024-10-14 16:44:58.288523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.827 [2024-10-14 16:44:58.325725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.827 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.827 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:53.827 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc 00:19:54.085 [2024-10-14 16:44:58.599209] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5NrG24K1Hc': 0100666 00:19:54.085 [2024-10-14 16:44:58.599245] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:54.085 request: 00:19:54.085 { 00:19:54.085 "name": "key0", 00:19:54.085 "path": "/tmp/tmp.5NrG24K1Hc", 00:19:54.085 "method": "keyring_file_add_key", 00:19:54.085 "req_id": 1 00:19:54.085 } 00:19:54.085 Got JSON-RPC error response 00:19:54.085 response: 00:19:54.085 { 00:19:54.085 "code": -1, 00:19:54.085 "message": "Operation not permitted" 00:19:54.085 } 00:19:54.085 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.344 [2024-10-14 16:44:58.811839] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.344 [2024-10-14 16:44:58.811865] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:54.344 request: 00:19:54.344 { 00:19:54.344 "name": "TLSTEST", 00:19:54.344 "trtype": "tcp", 00:19:54.344 "traddr": "10.0.0.2", 00:19:54.344 "adrfam": "ipv4", 00:19:54.344 "trsvcid": "4420", 00:19:54.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.344 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.344 "prchk_reftag": false, 00:19:54.344 "prchk_guard": false, 00:19:54.344 "hdgst": false, 00:19:54.344 "ddgst": false, 00:19:54.344 "psk": "key0", 00:19:54.344 "allow_unrecognized_csi": false, 00:19:54.344 "method": "bdev_nvme_attach_controller", 00:19:54.344 "req_id": 1 00:19:54.344 } 00:19:54.344 Got JSON-RPC error response 00:19:54.344 response: 00:19:54.344 { 00:19:54.344 "code": -126, 00:19:54.344 "message": "Required key not available" 00:19:54.344 } 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 563674 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 563674 ']' 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 563674 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 563674 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 563674' 00:19:54.344 killing process with pid 563674 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 563674 00:19:54.344 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.344 00:19:54.344 Latency(us) 00:19:54.344 [2024-10-14T14:44:58.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.344 [2024-10-14T14:44:58.978Z] =================================================================================================================== 00:19:54.344 [2024-10-14T14:44:58.978Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:54.344 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 563674 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 561589 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 561589 ']' 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 561589 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 561589 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 561589' 00:19:54.603 killing process with pid 561589 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 561589 00:19:54.603 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 561589 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=563917 
00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 563917 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 563917 ']' 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:54.862 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.862 [2024-10-14 16:44:59.311607] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:19:54.862 [2024-10-14 16:44:59.311656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.862 [2024-10-14 16:44:59.381311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.862 [2024-10-14 16:44:59.416528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.862 [2024-10-14 16:44:59.416561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.862 [2024-10-14 16:44:59.416568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.862 [2024-10-14 16:44:59.416575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.862 [2024-10-14 16:44:59.416580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
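Note on the failures that follow: SPDK's file-based keyring rejects a PSK file that is readable by group or others, which is exactly what the chmod 0666 on /tmp/tmp.5NrG24K1Hc earlier in this test produced. Both the bdevperf instance above (pid 563674) and the target starting here therefore log "Invalid permissions for key file '/tmp/tmp.5NrG24K1Hc': 0100666" and fail keyring_file_add_key with "Operation not permitted". A minimal sketch of the failure mode and its fix, assuming an SPDK checkout as the working directory and the default /var/tmp/spdk.sock RPC socket (the key path is just the temp file this run created):

# A group/other-accessible PSK file (mode 0666 here) is refused by the keyring.
chmod 0600 /tmp/tmp.5NrG24K1Hc
# With mode 0600 the same RPC call succeeds.
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc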
00:19:54.862 [2024-10-14 16:44:59.417177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.5NrG24K1Hc 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5NrG24K1Hc 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.5NrG24K1Hc 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5NrG24K1Hc 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:55.121 [2024-10-14 16:44:59.723214] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.121 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:55.379 16:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:55.637 [2024-10-14 16:45:00.132272] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:55.637 [2024-10-14 16:45:00.132486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.637 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:55.895 malloc0 00:19:55.895 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:56.153 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc 00:19:56.153 [2024-10-14 
16:45:00.757946] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5NrG24K1Hc': 0100666 00:19:56.153 [2024-10-14 16:45:00.757976] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:56.153 request: 00:19:56.153 { 00:19:56.153 "name": "key0", 00:19:56.153 "path": "/tmp/tmp.5NrG24K1Hc", 00:19:56.153 "method": "keyring_file_add_key", 00:19:56.153 "req_id": 1 00:19:56.153 } 00:19:56.153 Got JSON-RPC error response 00:19:56.153 response: 00:19:56.153 { 00:19:56.153 "code": -1, 00:19:56.153 "message": "Operation not permitted" 00:19:56.153 } 00:19:56.153 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:56.411 [2024-10-14 16:45:00.958491] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:56.411 [2024-10-14 16:45:00.958528] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:56.411 request: 00:19:56.411 { 00:19:56.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.411 "host": "nqn.2016-06.io.spdk:host1", 00:19:56.411 "psk": "key0", 00:19:56.411 "method": "nvmf_subsystem_add_host", 00:19:56.411 "req_id": 1 00:19:56.411 } 00:19:56.411 Got JSON-RPC error response 00:19:56.411 response: 00:19:56.411 { 00:19:56.411 "code": -32603, 00:19:56.411 "message": "Internal error" 00:19:56.411 } 00:19:56.411 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:56.411 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:56.411 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:56.411 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:56.411 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 563917 00:19:56.411 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 563917 ']' 00:19:56.411 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 563917 00:19:56.411 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:56.411 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.411 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 563917 00:19:56.411 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:56.411 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:56.411 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 563917' 00:19:56.411 killing process with pid 563917 00:19:56.411 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 563917 00:19:56.411 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 563917 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.5NrG24K1Hc 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=564254 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 564254 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 564254 ']' 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.670 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.670 [2024-10-14 16:45:01.262368] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:19:56.670 [2024-10-14 16:45:01.262419] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.929 [2024-10-14 16:45:01.334864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.929 [2024-10-14 16:45:01.374791] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.929 [2024-10-14 16:45:01.374830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.929 [2024-10-14 16:45:01.374837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.929 [2024-10-14 16:45:01.374843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.929 [2024-10-14 16:45:01.374848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
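For readability, the setup_nvmf_tgt sequence that the new target (pid 564254) runs next, and that the previous target failed only because of the key-file mode, condenses to the RPC calls below. This is a sketch with scripts/rpc.py written relative to the SPDK checkout and the default /var/tmp/spdk.sock RPC socket assumed; the ip netns wrapper used on this rig is omitted:

# TCP transport, a subsystem backed by a malloc bdev, and a TLS-enabled listener (-k).
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Register the PSK (file must be mode 0600) and authorize host1 to use it.
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0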
00:19:56.929 [2024-10-14 16:45:01.375414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.929 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.929 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.929 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:56.929 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:56.929 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.929 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.929 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.5NrG24K1Hc 00:19:56.929 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5NrG24K1Hc 00:19:56.929 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:57.188 [2024-10-14 16:45:01.690574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.188 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:57.446 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:57.705 [2024-10-14 16:45:02.091593] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.705 [2024-10-14 16:45:02.091798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.705 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.705 malloc0 00:19:57.705 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:57.963 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc 00:19:58.221 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:58.479 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=564646 00:19:58.479 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.479 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.479 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 564646 /var/tmp/bdevperf.sock 00:19:58.479 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 564646 ']' 00:19:58.479 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.479 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.479 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.479 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.479 16:45:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.479 [2024-10-14 16:45:02.980700] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:19:58.479 [2024-10-14 16:45:02.980751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid564646 ] 00:19:58.479 [2024-10-14 16:45:03.048427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.479 [2024-10-14 16:45:03.088161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.738 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.738 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:58.738 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc 00:19:58.738 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:58.997 [2024-10-14 16:45:03.542523] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.997 TLSTESTn1 00:19:59.256 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:59.515 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:59.515 "subsystems": [ 00:19:59.515 { 00:19:59.515 "subsystem": "keyring", 00:19:59.515 "config": [ 00:19:59.515 { 00:19:59.515 "method": "keyring_file_add_key", 00:19:59.515 "params": { 00:19:59.515 "name": "key0", 00:19:59.515 "path": "/tmp/tmp.5NrG24K1Hc" 00:19:59.515 } 00:19:59.515 } 00:19:59.515 ] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "iobuf", 00:19:59.515 "config": [ 00:19:59.515 { 00:19:59.515 "method": "iobuf_set_options", 00:19:59.515 "params": { 00:19:59.515 "small_pool_count": 8192, 00:19:59.515 "large_pool_count": 1024, 00:19:59.515 "small_bufsize": 8192, 00:19:59.515 "large_bufsize": 135168 00:19:59.515 } 00:19:59.515 } 00:19:59.515 ] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "sock", 00:19:59.515 "config": [ 00:19:59.515 { 00:19:59.515 "method": "sock_set_default_impl", 00:19:59.515 "params": { 00:19:59.515 "impl_name": "posix" 00:19:59.515 } 00:19:59.515 }, 
00:19:59.515 { 00:19:59.515 "method": "sock_impl_set_options", 00:19:59.515 "params": { 00:19:59.515 "impl_name": "ssl", 00:19:59.515 "recv_buf_size": 4096, 00:19:59.515 "send_buf_size": 4096, 00:19:59.515 "enable_recv_pipe": true, 00:19:59.515 "enable_quickack": false, 00:19:59.515 "enable_placement_id": 0, 00:19:59.515 "enable_zerocopy_send_server": true, 00:19:59.515 "enable_zerocopy_send_client": false, 00:19:59.515 "zerocopy_threshold": 0, 00:19:59.515 "tls_version": 0, 00:19:59.515 "enable_ktls": false 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "sock_impl_set_options", 00:19:59.515 "params": { 00:19:59.515 "impl_name": "posix", 00:19:59.515 "recv_buf_size": 2097152, 00:19:59.515 "send_buf_size": 2097152, 00:19:59.515 "enable_recv_pipe": true, 00:19:59.515 "enable_quickack": false, 00:19:59.515 "enable_placement_id": 0, 00:19:59.515 "enable_zerocopy_send_server": true, 00:19:59.515 "enable_zerocopy_send_client": false, 00:19:59.515 "zerocopy_threshold": 0, 00:19:59.515 "tls_version": 0, 00:19:59.515 "enable_ktls": false 00:19:59.515 } 00:19:59.515 } 00:19:59.515 ] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "vmd", 00:19:59.515 "config": [] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "accel", 00:19:59.515 "config": [ 00:19:59.515 { 00:19:59.515 "method": "accel_set_options", 00:19:59.515 "params": { 00:19:59.515 "small_cache_size": 128, 00:19:59.515 "large_cache_size": 16, 00:19:59.515 "task_count": 2048, 00:19:59.515 "sequence_count": 2048, 00:19:59.515 "buf_count": 2048 00:19:59.515 } 00:19:59.515 } 00:19:59.515 ] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "bdev", 00:19:59.515 "config": [ 00:19:59.515 { 00:19:59.515 "method": "bdev_set_options", 00:19:59.515 "params": { 00:19:59.515 "bdev_io_pool_size": 65535, 00:19:59.515 "bdev_io_cache_size": 256, 00:19:59.515 "bdev_auto_examine": true, 00:19:59.515 "iobuf_small_cache_size": 128, 00:19:59.515 "iobuf_large_cache_size": 16 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "bdev_raid_set_options", 00:19:59.515 "params": { 00:19:59.515 "process_window_size_kb": 1024, 00:19:59.515 "process_max_bandwidth_mb_sec": 0 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "bdev_iscsi_set_options", 00:19:59.515 "params": { 00:19:59.515 "timeout_sec": 30 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "bdev_nvme_set_options", 00:19:59.515 "params": { 00:19:59.515 "action_on_timeout": "none", 00:19:59.515 "timeout_us": 0, 00:19:59.515 "timeout_admin_us": 0, 00:19:59.515 "keep_alive_timeout_ms": 10000, 00:19:59.515 "arbitration_burst": 0, 00:19:59.515 "low_priority_weight": 0, 00:19:59.515 "medium_priority_weight": 0, 00:19:59.515 "high_priority_weight": 0, 00:19:59.515 "nvme_adminq_poll_period_us": 10000, 00:19:59.515 "nvme_ioq_poll_period_us": 0, 00:19:59.515 "io_queue_requests": 0, 00:19:59.515 "delay_cmd_submit": true, 00:19:59.515 "transport_retry_count": 4, 00:19:59.515 "bdev_retry_count": 3, 00:19:59.515 "transport_ack_timeout": 0, 00:19:59.515 "ctrlr_loss_timeout_sec": 0, 00:19:59.515 "reconnect_delay_sec": 0, 00:19:59.515 "fast_io_fail_timeout_sec": 0, 00:19:59.515 "disable_auto_failback": false, 00:19:59.515 "generate_uuids": false, 00:19:59.515 "transport_tos": 0, 00:19:59.515 "nvme_error_stat": false, 00:19:59.515 "rdma_srq_size": 0, 00:19:59.515 "io_path_stat": false, 00:19:59.515 "allow_accel_sequence": false, 00:19:59.515 "rdma_max_cq_size": 0, 00:19:59.515 "rdma_cm_event_timeout_ms": 0, 00:19:59.515 
"dhchap_digests": [ 00:19:59.515 "sha256", 00:19:59.515 "sha384", 00:19:59.515 "sha512" 00:19:59.515 ], 00:19:59.515 "dhchap_dhgroups": [ 00:19:59.515 "null", 00:19:59.515 "ffdhe2048", 00:19:59.515 "ffdhe3072", 00:19:59.515 "ffdhe4096", 00:19:59.515 "ffdhe6144", 00:19:59.515 "ffdhe8192" 00:19:59.515 ] 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "bdev_nvme_set_hotplug", 00:19:59.515 "params": { 00:19:59.515 "period_us": 100000, 00:19:59.515 "enable": false 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "bdev_malloc_create", 00:19:59.515 "params": { 00:19:59.515 "name": "malloc0", 00:19:59.515 "num_blocks": 8192, 00:19:59.515 "block_size": 4096, 00:19:59.515 "physical_block_size": 4096, 00:19:59.515 "uuid": "72627fec-2549-4a76-bb32-9f12afa41812", 00:19:59.515 "optimal_io_boundary": 0, 00:19:59.515 "md_size": 0, 00:19:59.515 "dif_type": 0, 00:19:59.515 "dif_is_head_of_md": false, 00:19:59.515 "dif_pi_format": 0 00:19:59.515 } 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "method": "bdev_wait_for_examine" 00:19:59.515 } 00:19:59.515 ] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "nbd", 00:19:59.515 "config": [] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "scheduler", 00:19:59.515 "config": [ 00:19:59.515 { 00:19:59.515 "method": "framework_set_scheduler", 00:19:59.515 "params": { 00:19:59.515 "name": "static" 00:19:59.515 } 00:19:59.515 } 00:19:59.515 ] 00:19:59.515 }, 00:19:59.515 { 00:19:59.515 "subsystem": "nvmf", 00:19:59.515 "config": [ 00:19:59.515 { 00:19:59.515 "method": "nvmf_set_config", 00:19:59.515 "params": { 00:19:59.515 "discovery_filter": "match_any", 00:19:59.516 "admin_cmd_passthru": { 00:19:59.516 "identify_ctrlr": false 00:19:59.516 }, 00:19:59.516 "dhchap_digests": [ 00:19:59.516 "sha256", 00:19:59.516 "sha384", 00:19:59.516 "sha512" 00:19:59.516 ], 00:19:59.516 "dhchap_dhgroups": [ 00:19:59.516 "null", 00:19:59.516 "ffdhe2048", 00:19:59.516 "ffdhe3072", 00:19:59.516 "ffdhe4096", 00:19:59.516 "ffdhe6144", 00:19:59.516 "ffdhe8192" 00:19:59.516 ] 00:19:59.516 } 00:19:59.516 }, 00:19:59.516 { 00:19:59.516 "method": "nvmf_set_max_subsystems", 00:19:59.516 "params": { 00:19:59.516 "max_subsystems": 1024 00:19:59.516 } 00:19:59.516 }, 00:19:59.516 { 00:19:59.516 "method": "nvmf_set_crdt", 00:19:59.516 "params": { 00:19:59.516 "crdt1": 0, 00:19:59.516 "crdt2": 0, 00:19:59.516 "crdt3": 0 00:19:59.516 } 00:19:59.516 }, 00:19:59.516 { 00:19:59.516 "method": "nvmf_create_transport", 00:19:59.516 "params": { 00:19:59.516 "trtype": "TCP", 00:19:59.516 "max_queue_depth": 128, 00:19:59.516 "max_io_qpairs_per_ctrlr": 127, 00:19:59.516 "in_capsule_data_size": 4096, 00:19:59.516 "max_io_size": 131072, 00:19:59.516 "io_unit_size": 131072, 00:19:59.516 "max_aq_depth": 128, 00:19:59.516 "num_shared_buffers": 511, 00:19:59.516 "buf_cache_size": 4294967295, 00:19:59.516 "dif_insert_or_strip": false, 00:19:59.516 "zcopy": false, 00:19:59.516 "c2h_success": false, 00:19:59.516 "sock_priority": 0, 00:19:59.516 "abort_timeout_sec": 1, 00:19:59.516 "ack_timeout": 0, 00:19:59.516 "data_wr_pool_size": 0 00:19:59.516 } 00:19:59.516 }, 00:19:59.516 { 00:19:59.516 "method": "nvmf_create_subsystem", 00:19:59.516 "params": { 00:19:59.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.516 "allow_any_host": false, 00:19:59.516 "serial_number": "SPDK00000000000001", 00:19:59.516 "model_number": "SPDK bdev Controller", 00:19:59.516 "max_namespaces": 10, 00:19:59.516 "min_cntlid": 1, 00:19:59.516 "max_cntlid": 65519, 00:19:59.516 
"ana_reporting": false 00:19:59.516 } 00:19:59.516 }, 00:19:59.516 { 00:19:59.516 "method": "nvmf_subsystem_add_host", 00:19:59.516 "params": { 00:19:59.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.516 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.516 "psk": "key0" 00:19:59.516 } 00:19:59.516 }, 00:19:59.516 { 00:19:59.516 "method": "nvmf_subsystem_add_ns", 00:19:59.516 "params": { 00:19:59.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.516 "namespace": { 00:19:59.516 "nsid": 1, 00:19:59.516 "bdev_name": "malloc0", 00:19:59.516 "nguid": "72627FEC25494A76BB329F12AFA41812", 00:19:59.516 "uuid": "72627fec-2549-4a76-bb32-9f12afa41812", 00:19:59.516 "no_auto_visible": false 00:19:59.516 } 00:19:59.516 } 00:19:59.516 }, 00:19:59.516 { 00:19:59.516 "method": "nvmf_subsystem_add_listener", 00:19:59.516 "params": { 00:19:59.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.516 "listen_address": { 00:19:59.516 "trtype": "TCP", 00:19:59.516 "adrfam": "IPv4", 00:19:59.516 "traddr": "10.0.0.2", 00:19:59.516 "trsvcid": "4420" 00:19:59.516 }, 00:19:59.516 "secure_channel": true 00:19:59.516 } 00:19:59.516 } 00:19:59.516 ] 00:19:59.516 } 00:19:59.516 ] 00:19:59.516 }' 00:19:59.516 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:59.775 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:59.775 "subsystems": [ 00:19:59.775 { 00:19:59.775 "subsystem": "keyring", 00:19:59.775 "config": [ 00:19:59.775 { 00:19:59.775 "method": "keyring_file_add_key", 00:19:59.775 "params": { 00:19:59.775 "name": "key0", 00:19:59.775 "path": "/tmp/tmp.5NrG24K1Hc" 00:19:59.775 } 00:19:59.775 } 00:19:59.775 ] 00:19:59.775 }, 00:19:59.775 { 00:19:59.775 "subsystem": "iobuf", 00:19:59.775 "config": [ 00:19:59.775 { 00:19:59.775 "method": "iobuf_set_options", 00:19:59.775 "params": { 00:19:59.775 "small_pool_count": 8192, 00:19:59.775 "large_pool_count": 1024, 00:19:59.775 "small_bufsize": 8192, 00:19:59.775 "large_bufsize": 135168 00:19:59.775 } 00:19:59.775 } 00:19:59.775 ] 00:19:59.775 }, 00:19:59.775 { 00:19:59.775 "subsystem": "sock", 00:19:59.775 "config": [ 00:19:59.775 { 00:19:59.775 "method": "sock_set_default_impl", 00:19:59.775 "params": { 00:19:59.775 "impl_name": "posix" 00:19:59.775 } 00:19:59.775 }, 00:19:59.775 { 00:19:59.775 "method": "sock_impl_set_options", 00:19:59.775 "params": { 00:19:59.775 "impl_name": "ssl", 00:19:59.775 "recv_buf_size": 4096, 00:19:59.775 "send_buf_size": 4096, 00:19:59.775 "enable_recv_pipe": true, 00:19:59.775 "enable_quickack": false, 00:19:59.775 "enable_placement_id": 0, 00:19:59.775 "enable_zerocopy_send_server": true, 00:19:59.775 "enable_zerocopy_send_client": false, 00:19:59.775 "zerocopy_threshold": 0, 00:19:59.775 "tls_version": 0, 00:19:59.775 "enable_ktls": false 00:19:59.775 } 00:19:59.775 }, 00:19:59.775 { 00:19:59.775 "method": "sock_impl_set_options", 00:19:59.775 "params": { 00:19:59.775 "impl_name": "posix", 00:19:59.775 "recv_buf_size": 2097152, 00:19:59.776 "send_buf_size": 2097152, 00:19:59.776 "enable_recv_pipe": true, 00:19:59.776 "enable_quickack": false, 00:19:59.776 "enable_placement_id": 0, 00:19:59.776 "enable_zerocopy_send_server": true, 00:19:59.776 "enable_zerocopy_send_client": false, 00:19:59.776 "zerocopy_threshold": 0, 00:19:59.776 "tls_version": 0, 00:19:59.776 "enable_ktls": false 00:19:59.776 } 00:19:59.776 } 00:19:59.776 ] 00:19:59.776 }, 00:19:59.776 { 00:19:59.776 
"subsystem": "vmd", 00:19:59.776 "config": [] 00:19:59.776 }, 00:19:59.776 { 00:19:59.776 "subsystem": "accel", 00:19:59.776 "config": [ 00:19:59.776 { 00:19:59.776 "method": "accel_set_options", 00:19:59.776 "params": { 00:19:59.776 "small_cache_size": 128, 00:19:59.776 "large_cache_size": 16, 00:19:59.776 "task_count": 2048, 00:19:59.776 "sequence_count": 2048, 00:19:59.776 "buf_count": 2048 00:19:59.776 } 00:19:59.776 } 00:19:59.776 ] 00:19:59.776 }, 00:19:59.776 { 00:19:59.776 "subsystem": "bdev", 00:19:59.776 "config": [ 00:19:59.776 { 00:19:59.776 "method": "bdev_set_options", 00:19:59.776 "params": { 00:19:59.776 "bdev_io_pool_size": 65535, 00:19:59.776 "bdev_io_cache_size": 256, 00:19:59.776 "bdev_auto_examine": true, 00:19:59.776 "iobuf_small_cache_size": 128, 00:19:59.776 "iobuf_large_cache_size": 16 00:19:59.776 } 00:19:59.776 }, 00:19:59.776 { 00:19:59.776 "method": "bdev_raid_set_options", 00:19:59.776 "params": { 00:19:59.776 "process_window_size_kb": 1024, 00:19:59.776 "process_max_bandwidth_mb_sec": 0 00:19:59.776 } 00:19:59.776 }, 00:19:59.776 { 00:19:59.776 "method": "bdev_iscsi_set_options", 00:19:59.776 "params": { 00:19:59.776 "timeout_sec": 30 00:19:59.776 } 00:19:59.776 }, 00:19:59.776 { 00:19:59.776 "method": "bdev_nvme_set_options", 00:19:59.776 "params": { 00:19:59.776 "action_on_timeout": "none", 00:19:59.776 "timeout_us": 0, 00:19:59.776 "timeout_admin_us": 0, 00:19:59.776 "keep_alive_timeout_ms": 10000, 00:19:59.776 "arbitration_burst": 0, 00:19:59.776 "low_priority_weight": 0, 00:19:59.776 "medium_priority_weight": 0, 00:19:59.776 "high_priority_weight": 0, 00:19:59.776 "nvme_adminq_poll_period_us": 10000, 00:19:59.776 "nvme_ioq_poll_period_us": 0, 00:19:59.776 "io_queue_requests": 512, 00:19:59.776 "delay_cmd_submit": true, 00:19:59.776 "transport_retry_count": 4, 00:19:59.776 "bdev_retry_count": 3, 00:19:59.776 "transport_ack_timeout": 0, 00:19:59.776 "ctrlr_loss_timeout_sec": 0, 00:19:59.776 "reconnect_delay_sec": 0, 00:19:59.776 "fast_io_fail_timeout_sec": 0, 00:19:59.776 "disable_auto_failback": false, 00:19:59.776 "generate_uuids": false, 00:19:59.776 "transport_tos": 0, 00:19:59.776 "nvme_error_stat": false, 00:19:59.776 "rdma_srq_size": 0, 00:19:59.776 "io_path_stat": false, 00:19:59.776 "allow_accel_sequence": false, 00:19:59.776 "rdma_max_cq_size": 0, 00:19:59.776 "rdma_cm_event_timeout_ms": 0, 00:19:59.776 "dhchap_digests": [ 00:19:59.776 "sha256", 00:19:59.776 "sha384", 00:19:59.776 "sha512" 00:19:59.776 ], 00:19:59.776 "dhchap_dhgroups": [ 00:19:59.776 "null", 00:19:59.776 "ffdhe2048", 00:19:59.776 "ffdhe3072", 00:19:59.776 "ffdhe4096", 00:19:59.776 "ffdhe6144", 00:19:59.776 "ffdhe8192" 00:19:59.776 ] 00:19:59.776 } 00:19:59.776 }, 00:19:59.776 { 00:19:59.776 "method": "bdev_nvme_attach_controller", 00:19:59.776 "params": { 00:19:59.776 "name": "TLSTEST", 00:19:59.776 "trtype": "TCP", 00:19:59.776 "adrfam": "IPv4", 00:19:59.776 "traddr": "10.0.0.2", 00:19:59.776 "trsvcid": "4420", 00:19:59.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.776 "prchk_reftag": false, 00:19:59.776 "prchk_guard": false, 00:19:59.776 "ctrlr_loss_timeout_sec": 0, 00:19:59.776 "reconnect_delay_sec": 0, 00:19:59.776 "fast_io_fail_timeout_sec": 0, 00:19:59.776 "psk": "key0", 00:19:59.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.776 "hdgst": false, 00:19:59.776 "ddgst": false, 00:19:59.776 "multipath": "multipath" 00:19:59.776 } 00:19:59.776 }, 00:19:59.776 { 00:19:59.776 "method": "bdev_nvme_set_hotplug", 00:19:59.776 "params": { 00:19:59.776 "period_us": 
100000, 00:19:59.776 "enable": false 00:19:59.776 } 00:19:59.776 }, 00:19:59.776 { 00:19:59.776 "method": "bdev_wait_for_examine" 00:19:59.776 } 00:19:59.776 ] 00:19:59.776 }, 00:19:59.776 { 00:19:59.776 "subsystem": "nbd", 00:19:59.776 "config": [] 00:19:59.776 } 00:19:59.776 ] 00:19:59.776 }' 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 564646 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 564646 ']' 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 564646 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 564646 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 564646' 00:19:59.776 killing process with pid 564646 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 564646 00:19:59.776 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.776 00:19:59.776 Latency(us) 00:19:59.776 [2024-10-14T14:45:04.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.776 [2024-10-14T14:45:04.410Z] =================================================================================================================== 00:19:59.776 [2024-10-14T14:45:04.410Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 564646 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 564254 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 564254 ']' 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 564254 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.776 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 564254 00:20:00.035 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:00.035 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:00.035 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 564254' 00:20:00.035 killing process with pid 564254 00:20:00.035 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 564254 00:20:00.035 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 564254 00:20:00.035 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:00.035 16:45:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:00.035 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:00.035 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:00.035 "subsystems": [ 00:20:00.035 { 00:20:00.035 "subsystem": "keyring", 00:20:00.035 "config": [ 00:20:00.035 { 00:20:00.035 "method": "keyring_file_add_key", 00:20:00.035 "params": { 00:20:00.035 "name": "key0", 00:20:00.035 "path": "/tmp/tmp.5NrG24K1Hc" 00:20:00.035 } 00:20:00.035 } 00:20:00.035 ] 00:20:00.035 }, 00:20:00.035 { 00:20:00.035 "subsystem": "iobuf", 00:20:00.035 "config": [ 00:20:00.035 { 00:20:00.035 "method": "iobuf_set_options", 00:20:00.035 "params": { 00:20:00.035 "small_pool_count": 8192, 00:20:00.035 "large_pool_count": 1024, 00:20:00.036 "small_bufsize": 8192, 00:20:00.036 "large_bufsize": 135168 00:20:00.036 } 00:20:00.036 } 00:20:00.036 ] 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "subsystem": "sock", 00:20:00.036 "config": [ 00:20:00.036 { 00:20:00.036 "method": "sock_set_default_impl", 00:20:00.036 "params": { 00:20:00.036 "impl_name": "posix" 00:20:00.036 } 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "method": "sock_impl_set_options", 00:20:00.036 "params": { 00:20:00.036 "impl_name": "ssl", 00:20:00.036 "recv_buf_size": 4096, 00:20:00.036 "send_buf_size": 4096, 00:20:00.036 "enable_recv_pipe": true, 00:20:00.036 "enable_quickack": false, 00:20:00.036 "enable_placement_id": 0, 00:20:00.036 "enable_zerocopy_send_server": true, 00:20:00.036 "enable_zerocopy_send_client": false, 00:20:00.036 "zerocopy_threshold": 0, 00:20:00.036 "tls_version": 0, 00:20:00.036 "enable_ktls": false 00:20:00.036 } 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "method": "sock_impl_set_options", 00:20:00.036 "params": { 00:20:00.036 "impl_name": "posix", 00:20:00.036 "recv_buf_size": 2097152, 00:20:00.036 "send_buf_size": 2097152, 00:20:00.036 "enable_recv_pipe": true, 00:20:00.036 "enable_quickack": false, 00:20:00.036 "enable_placement_id": 0, 00:20:00.036 "enable_zerocopy_send_server": true, 00:20:00.036 "enable_zerocopy_send_client": false, 00:20:00.036 "zerocopy_threshold": 0, 00:20:00.036 "tls_version": 0, 00:20:00.036 "enable_ktls": false 00:20:00.036 } 00:20:00.036 } 00:20:00.036 ] 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "subsystem": "vmd", 00:20:00.036 "config": [] 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "subsystem": "accel", 00:20:00.036 "config": [ 00:20:00.036 { 00:20:00.036 "method": "accel_set_options", 00:20:00.036 "params": { 00:20:00.036 "small_cache_size": 128, 00:20:00.036 "large_cache_size": 16, 00:20:00.036 "task_count": 2048, 00:20:00.036 "sequence_count": 2048, 00:20:00.036 "buf_count": 2048 00:20:00.036 } 00:20:00.036 } 00:20:00.036 ] 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "subsystem": "bdev", 00:20:00.036 "config": [ 00:20:00.036 { 00:20:00.036 "method": "bdev_set_options", 00:20:00.036 "params": { 00:20:00.036 "bdev_io_pool_size": 65535, 00:20:00.036 "bdev_io_cache_size": 256, 00:20:00.036 "bdev_auto_examine": true, 00:20:00.036 "iobuf_small_cache_size": 128, 00:20:00.036 "iobuf_large_cache_size": 16 00:20:00.036 } 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "method": "bdev_raid_set_options", 00:20:00.036 "params": { 00:20:00.036 "process_window_size_kb": 1024, 00:20:00.036 "process_max_bandwidth_mb_sec": 0 00:20:00.036 } 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "method": "bdev_iscsi_set_options", 00:20:00.036 "params": { 00:20:00.036 "timeout_sec": 
30 00:20:00.036 } 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "method": "bdev_nvme_set_options", 00:20:00.036 "params": { 00:20:00.036 "action_on_timeout": "none", 00:20:00.036 "timeout_us": 0, 00:20:00.036 "timeout_admin_us": 0, 00:20:00.036 "keep_alive_timeout_ms": 10000, 00:20:00.036 "arbitration_burst": 0, 00:20:00.036 "low_priority_weight": 0, 00:20:00.036 "medium_priority_weight": 0, 00:20:00.036 "high_priority_weight": 0, 00:20:00.036 "nvme_adminq_poll_period_us": 10000, 00:20:00.036 "nvme_ioq_poll_period_us": 0, 00:20:00.036 "io_queue_requests": 0, 00:20:00.036 "delay_cmd_submit": true, 00:20:00.036 "transport_retry_count": 4, 00:20:00.036 "bdev_retry_count": 3, 00:20:00.036 "transport_ack_timeout": 0, 00:20:00.036 "ctrlr_loss_timeout_sec": 0, 00:20:00.036 "reconnect_delay_sec": 0, 00:20:00.036 "fast_io_fail_timeout_sec": 0, 00:20:00.036 "disable_auto_failback": false, 00:20:00.036 "generate_uuids": false, 00:20:00.036 "transport_tos": 0, 00:20:00.036 "nvme_error_stat": false, 00:20:00.036 "rdma_srq_size": 0, 00:20:00.036 "io_path_stat": false, 00:20:00.036 "allow_accel_sequence": false, 00:20:00.036 "rdma_max_cq_size": 0, 00:20:00.036 "rdma_cm_event_timeout_ms": 0, 00:20:00.036 "dhchap_digests": [ 00:20:00.036 "sha256", 00:20:00.036 "sha384", 00:20:00.036 "sha512" 00:20:00.036 ], 00:20:00.036 "dhchap_dhgroups": [ 00:20:00.036 "null", 00:20:00.036 "ffdhe2048", 00:20:00.036 "ffdhe3072", 00:20:00.036 "ffdhe4096", 00:20:00.036 "ffdhe6144", 00:20:00.036 "ffdhe8192" 00:20:00.036 ] 00:20:00.036 } 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "method": "bdev_nvme_set_hotplug", 00:20:00.036 "params": { 00:20:00.036 "period_us": 100000, 00:20:00.036 "enable": false 00:20:00.036 } 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "method": "bdev_malloc_create", 00:20:00.036 "params": { 00:20:00.036 "name": "malloc0", 00:20:00.036 "num_blocks": 8192, 00:20:00.036 "block_size": 4096, 00:20:00.036 "physical_block_size": 4096, 00:20:00.036 "uuid": "72627fec-2549-4a76-bb32-9f12afa41812", 00:20:00.036 "optimal_io_boundary": 0, 00:20:00.036 "md_size": 0, 00:20:00.036 "dif_type": 0, 00:20:00.036 "dif_is_head_of_md": false, 00:20:00.036 "dif_pi_format": 0 00:20:00.036 } 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "method": "bdev_wait_for_examine" 00:20:00.036 } 00:20:00.036 ] 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "subsystem": "nbd", 00:20:00.036 "config": [] 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "subsystem": "scheduler", 00:20:00.036 "config": [ 00:20:00.036 { 00:20:00.036 "method": "framework_set_scheduler", 00:20:00.036 "params": { 00:20:00.036 "name": "static" 00:20:00.036 } 00:20:00.036 } 00:20:00.036 ] 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "subsystem": "nvmf", 00:20:00.036 "config": [ 00:20:00.036 { 00:20:00.036 "method": "nvmf_set_config", 00:20:00.036 "params": { 00:20:00.036 "discovery_filter": "match_any", 00:20:00.036 "admin_cmd_passthru": { 00:20:00.036 "identify_ctrlr": false 00:20:00.036 }, 00:20:00.036 "dhchap_digests": [ 00:20:00.036 "sha256", 00:20:00.036 "sha384", 00:20:00.036 "sha512" 00:20:00.036 ], 00:20:00.036 "dhchap_dhgroups": [ 00:20:00.036 "null", 00:20:00.036 "ffdhe2048", 00:20:00.036 "ffdhe3072", 00:20:00.036 "ffdhe4096", 00:20:00.036 "ffdhe6144", 00:20:00.036 "ffdhe8192" 00:20:00.036 ] 00:20:00.036 } 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "method": "nvmf_set_max_subsystems", 00:20:00.036 "params": { 00:20:00.036 "max_subsystems": 1024 00:20:00.036 } 00:20:00.036 }, 00:20:00.036 { 00:20:00.036 "method": "nvmf_set_crdt", 00:20:00.036 "params": { 00:20:00.036 
"crdt1": 0, 00:20:00.036 "crdt2": 0, 00:20:00.037 "crdt3": 0 00:20:00.037 } 00:20:00.037 }, 00:20:00.037 { 00:20:00.037 "method": "nvmf_create_transport", 00:20:00.037 "params": { 00:20:00.037 "trtype": "TCP", 00:20:00.037 "max_queue_depth": 128, 00:20:00.037 "max_io_qpairs_per_ctrlr": 127, 00:20:00.037 "in_capsule_data_size": 4096, 00:20:00.037 "max_io_size": 131072, 00:20:00.037 "io_unit_size": 131072, 00:20:00.037 "max_aq_depth": 128, 00:20:00.037 "num_shared_buffers": 511, 00:20:00.037 "buf_cache_size": 4294967295, 00:20:00.037 "dif_insert_or_strip": false, 00:20:00.037 "zcopy": false, 00:20:00.037 "c2h_success": false, 00:20:00.037 "sock_priority": 0, 00:20:00.037 "abort_timeout_sec": 1, 00:20:00.037 "ack_timeout": 0, 00:20:00.037 "data_wr_pool_size": 0 00:20:00.037 } 00:20:00.037 }, 00:20:00.037 { 00:20:00.037 "method": "nvmf_create_subsystem", 00:20:00.037 "params": { 00:20:00.037 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.037 "allow_any_host": false, 00:20:00.037 "serial_number": "SPDK00000000000001", 00:20:00.037 "model_number": "SPDK bdev Controller", 00:20:00.037 "max_namespaces": 10, 00:20:00.037 "min_cntlid": 1, 00:20:00.037 "max_cntlid": 65519, 00:20:00.037 "ana_reporting": false 00:20:00.037 } 00:20:00.037 }, 00:20:00.037 { 00:20:00.037 "method": "nvmf_subsystem_add_host", 00:20:00.037 "params": { 00:20:00.037 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.037 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.037 "psk": "key0" 00:20:00.037 } 00:20:00.037 }, 00:20:00.037 { 00:20:00.037 "method": "nvmf_subsystem_add_ns", 00:20:00.037 "params": { 00:20:00.037 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.037 "namespace": { 00:20:00.037 "nsid": 1, 00:20:00.037 "bdev_name": "malloc0", 00:20:00.037 "nguid": "72627FEC25494A76BB329F12AFA41812", 00:20:00.037 "uuid": "72627fec-2549-4a76-bb32-9f12afa41812", 00:20:00.037 "no_auto_visible": false 00:20:00.037 } 00:20:00.037 } 00:20:00.037 }, 00:20:00.037 { 00:20:00.037 "method": "nvmf_subsystem_add_listener", 00:20:00.037 "params": { 00:20:00.037 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.037 "listen_address": { 00:20:00.037 "trtype": "TCP", 00:20:00.037 "adrfam": "IPv4", 00:20:00.037 "traddr": "10.0.0.2", 00:20:00.037 "trsvcid": "4420" 00:20:00.037 }, 00:20:00.037 "secure_channel": true 00:20:00.037 } 00:20:00.037 } 00:20:00.037 ] 00:20:00.037 } 00:20:00.037 ] 00:20:00.037 }' 00:20:00.037 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.037 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=565030 00:20:00.037 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:00.037 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 565030 00:20:00.037 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 565030 ']' 00:20:00.037 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.037 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.037 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:00.037 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.037 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.037 [2024-10-14 16:45:04.665983] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:20:00.037 [2024-10-14 16:45:04.666033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.295 [2024-10-14 16:45:04.731467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.295 [2024-10-14 16:45:04.772317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.295 [2024-10-14 16:45:04.772351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.295 [2024-10-14 16:45:04.772358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.295 [2024-10-14 16:45:04.772364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.295 [2024-10-14 16:45:04.772369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.295 [2024-10-14 16:45:04.773104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.553 [2024-10-14 16:45:04.985283] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.553 [2024-10-14 16:45:05.017310] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.553 [2024-10-14 16:45:05.017524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=565059 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 565059 /var/tmp/bdevperf.sock 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 565059 ']' 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:20:01.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.123 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:01.123 "subsystems": [ 00:20:01.123 { 00:20:01.123 "subsystem": "keyring", 00:20:01.123 "config": [ 00:20:01.123 { 00:20:01.123 "method": "keyring_file_add_key", 00:20:01.123 "params": { 00:20:01.123 "name": "key0", 00:20:01.123 "path": "/tmp/tmp.5NrG24K1Hc" 00:20:01.123 } 00:20:01.123 } 00:20:01.123 ] 00:20:01.123 }, 00:20:01.123 { 00:20:01.123 "subsystem": "iobuf", 00:20:01.123 "config": [ 00:20:01.123 { 00:20:01.123 "method": "iobuf_set_options", 00:20:01.123 "params": { 00:20:01.123 "small_pool_count": 8192, 00:20:01.123 "large_pool_count": 1024, 00:20:01.123 "small_bufsize": 8192, 00:20:01.123 "large_bufsize": 135168 00:20:01.123 } 00:20:01.123 } 00:20:01.123 ] 00:20:01.123 }, 00:20:01.123 { 00:20:01.123 "subsystem": "sock", 00:20:01.123 "config": [ 00:20:01.123 { 00:20:01.123 "method": "sock_set_default_impl", 00:20:01.123 "params": { 00:20:01.123 "impl_name": "posix" 00:20:01.123 } 00:20:01.123 }, 00:20:01.123 { 00:20:01.123 "method": "sock_impl_set_options", 00:20:01.123 "params": { 00:20:01.123 "impl_name": "ssl", 00:20:01.123 "recv_buf_size": 4096, 00:20:01.123 "send_buf_size": 4096, 00:20:01.123 "enable_recv_pipe": true, 00:20:01.123 "enable_quickack": false, 00:20:01.123 "enable_placement_id": 0, 00:20:01.123 "enable_zerocopy_send_server": true, 00:20:01.123 "enable_zerocopy_send_client": false, 00:20:01.123 "zerocopy_threshold": 0, 00:20:01.123 "tls_version": 0, 00:20:01.123 "enable_ktls": false 00:20:01.123 } 00:20:01.123 }, 00:20:01.123 { 00:20:01.123 "method": "sock_impl_set_options", 00:20:01.123 "params": { 00:20:01.123 "impl_name": "posix", 00:20:01.123 "recv_buf_size": 2097152, 00:20:01.123 "send_buf_size": 2097152, 00:20:01.123 "enable_recv_pipe": true, 00:20:01.123 "enable_quickack": false, 00:20:01.123 "enable_placement_id": 0, 00:20:01.123 "enable_zerocopy_send_server": true, 00:20:01.123 "enable_zerocopy_send_client": false, 00:20:01.123 "zerocopy_threshold": 0, 00:20:01.123 "tls_version": 0, 00:20:01.123 "enable_ktls": false 00:20:01.123 } 00:20:01.123 } 00:20:01.123 ] 00:20:01.123 }, 00:20:01.123 { 00:20:01.123 "subsystem": "vmd", 00:20:01.123 "config": [] 00:20:01.123 }, 00:20:01.123 { 00:20:01.123 "subsystem": "accel", 00:20:01.123 "config": [ 00:20:01.123 { 00:20:01.123 "method": "accel_set_options", 00:20:01.123 "params": { 00:20:01.123 "small_cache_size": 128, 00:20:01.123 "large_cache_size": 16, 00:20:01.123 "task_count": 2048, 00:20:01.123 "sequence_count": 2048, 00:20:01.123 "buf_count": 2048 00:20:01.123 } 00:20:01.123 } 00:20:01.123 ] 00:20:01.123 }, 00:20:01.123 { 00:20:01.123 "subsystem": "bdev", 00:20:01.123 "config": [ 00:20:01.123 { 00:20:01.123 "method": "bdev_set_options", 00:20:01.123 "params": { 00:20:01.123 "bdev_io_pool_size": 65535, 00:20:01.123 "bdev_io_cache_size": 256, 00:20:01.123 "bdev_auto_examine": true, 00:20:01.123 "iobuf_small_cache_size": 128, 00:20:01.123 "iobuf_large_cache_size": 16 00:20:01.123 } 00:20:01.123 }, 00:20:01.123 { 00:20:01.123 "method": "bdev_raid_set_options", 00:20:01.123 "params": { 00:20:01.123 "process_window_size_kb": 1024, 00:20:01.123 "process_max_bandwidth_mb_sec": 0 00:20:01.124 } 00:20:01.124 }, 00:20:01.124 { 00:20:01.124 "method": "bdev_iscsi_set_options", 00:20:01.124 "params": { 00:20:01.124 "timeout_sec": 30 00:20:01.124 } 00:20:01.124 }, 00:20:01.124 { 
00:20:01.124 "method": "bdev_nvme_set_options", 00:20:01.124 "params": { 00:20:01.124 "action_on_timeout": "none", 00:20:01.124 "timeout_us": 0, 00:20:01.124 "timeout_admin_us": 0, 00:20:01.124 "keep_alive_timeout_ms": 10000, 00:20:01.124 "arbitration_burst": 0, 00:20:01.124 "low_priority_weight": 0, 00:20:01.124 "medium_priority_weight": 0, 00:20:01.124 "high_priority_weight": 0, 00:20:01.124 "nvme_adminq_poll_period_us": 10000, 00:20:01.124 "nvme_ioq_poll_period_us": 0, 00:20:01.124 "io_queue_requests": 512, 00:20:01.124 "delay_cmd_submit": true, 00:20:01.124 "transport_retry_count": 4, 00:20:01.124 "bdev_retry_count": 3, 00:20:01.124 "transport_ack_timeout": 0, 00:20:01.124 "ctrlr_loss_timeout_sec": 0, 00:20:01.124 "reconnect_delay_sec": 0, 00:20:01.124 "fast_io_fail_timeout_sec": 0, 00:20:01.124 "disable_auto_failback": false, 00:20:01.124 "generate_uuids": false, 00:20:01.124 "transport_tos": 0, 00:20:01.124 "nvme_error_stat": false, 00:20:01.124 "rdma_srq_size": 0, 00:20:01.124 "io_path_stat": false, 00:20:01.124 "allow_accel_sequence": false, 00:20:01.124 "rdma_max_cq_size": 0, 00:20:01.124 "rdma_cm_event_timeout_ms": 0, 00:20:01.124 "dhchap_digests": [ 00:20:01.124 "sha256", 00:20:01.124 "sha384", 00:20:01.124 "sha512" 00:20:01.124 ], 00:20:01.124 "dhchap_dhgroups": [ 00:20:01.124 "null", 00:20:01.124 "ffdhe2048", 00:20:01.124 "ffdhe3072", 00:20:01.124 "ffdhe4096", 00:20:01.124 "ffdhe6144", 00:20:01.124 "ffdhe8192" 00:20:01.124 ] 00:20:01.124 } 00:20:01.124 }, 00:20:01.124 { 00:20:01.124 "method": "bdev_nvme_attach_controller", 00:20:01.124 "params": { 00:20:01.124 "name": "TLSTEST", 00:20:01.124 "trtype": "TCP", 00:20:01.124 "adrfam": "IPv4", 00:20:01.124 "traddr": "10.0.0.2", 00:20:01.124 "trsvcid": "4420", 00:20:01.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.124 "prchk_reftag": false, 00:20:01.124 "prchk_guard": false, 00:20:01.124 "ctrlr_loss_timeout_sec": 0, 00:20:01.124 "reconnect_delay_sec": 0, 00:20:01.124 "fast_io_fail_timeout_sec": 0, 00:20:01.124 "psk": "key0", 00:20:01.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.124 "hdgst": false, 00:20:01.124 "ddgst": false, 00:20:01.124 "multipath": "multipath" 00:20:01.124 } 00:20:01.124 }, 00:20:01.124 { 00:20:01.124 "method": "bdev_nvme_set_hotplug", 00:20:01.124 "params": { 00:20:01.124 "period_us": 100000, 00:20:01.124 "enable": false 00:20:01.124 } 00:20:01.124 }, 00:20:01.124 { 00:20:01.124 "method": "bdev_wait_for_examine" 00:20:01.124 } 00:20:01.124 ] 00:20:01.124 }, 00:20:01.124 { 00:20:01.124 "subsystem": "nbd", 00:20:01.124 "config": [] 00:20:01.124 } 00:20:01.124 ] 00:20:01.124 }' 00:20:01.124 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.124 16:45:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.124 [2024-10-14 16:45:05.583035] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
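The two JSON documents echoed above are the two halves of this test case: the first is the target configuration handed to nvmf_tgt through -c /dev/fd/62 (TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with malloc0 as namespace 1, host nqn.2016-06.io.spdk:host1 admitted with psk "key0", and a listener on 10.0.0.2:4420 with secure_channel enabled), and the second is the initiator configuration handed to bdevperf through -c /dev/fd/63 (a keyring entry for the same PSK file plus a bdev_nvme_attach_controller named TLSTEST that dials that listener with the PSK). The same state can also be built against already-running processes over their RPC sockets; a minimal sketch in the shell, using the key file, NQNs and addresses from this log, with rpc.py paths shortened relative to the SPDK tree (this mirrors the setup_nvmf_tgt and rpc.py steps that appear verbatim further down in this log, it is not a new procedure):

  # target side (default RPC socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # initiator side (bdevperf started with -z -r /var/tmp/bdevperf.sock)
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -k on nvmf_subsystem_add_listener is what requests the TLS-secured listener here (compare "secure_channel": true in the first config), and --psk on both nvmf_subsystem_add_host and bdev_nvme_attach_controller names the keyring entry "key0" rather than the key file itself.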
00:20:01.124 [2024-10-14 16:45:05.583081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565059 ] 00:20:01.124 [2024-10-14 16:45:05.651428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.124 [2024-10-14 16:45:05.693934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.382 [2024-10-14 16:45:05.846641] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.949 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.949 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.949 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:01.949 Running I/O for 10 seconds... 00:20:04.259 5331.00 IOPS, 20.82 MiB/s [2024-10-14T14:45:09.830Z] 5401.00 IOPS, 21.10 MiB/s [2024-10-14T14:45:10.766Z] 5226.00 IOPS, 20.41 MiB/s [2024-10-14T14:45:11.700Z] 5167.00 IOPS, 20.18 MiB/s [2024-10-14T14:45:12.636Z] 5136.80 IOPS, 20.07 MiB/s [2024-10-14T14:45:13.570Z] 5099.50 IOPS, 19.92 MiB/s [2024-10-14T14:45:14.943Z] 5085.43 IOPS, 19.86 MiB/s [2024-10-14T14:45:15.879Z] 5071.75 IOPS, 19.81 MiB/s [2024-10-14T14:45:16.813Z] 5009.00 IOPS, 19.57 MiB/s [2024-10-14T14:45:16.813Z] 5015.50 IOPS, 19.59 MiB/s 00:20:12.179 Latency(us) 00:20:12.179 [2024-10-14T14:45:16.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.179 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:12.179 Verification LBA range: start 0x0 length 0x2000 00:20:12.179 TLSTESTn1 : 10.02 5019.82 19.61 0.00 0.00 25462.93 6709.64 29335.16 00:20:12.179 [2024-10-14T14:45:16.813Z] =================================================================================================================== 00:20:12.179 [2024-10-14T14:45:16.813Z] Total : 5019.82 19.61 0.00 0.00 25462.93 6709.64 29335.16 00:20:12.179 { 00:20:12.179 "results": [ 00:20:12.179 { 00:20:12.179 "job": "TLSTESTn1", 00:20:12.179 "core_mask": "0x4", 00:20:12.179 "workload": "verify", 00:20:12.179 "status": "finished", 00:20:12.179 "verify_range": { 00:20:12.179 "start": 0, 00:20:12.179 "length": 8192 00:20:12.179 }, 00:20:12.179 "queue_depth": 128, 00:20:12.179 "io_size": 4096, 00:20:12.179 "runtime": 10.016889, 00:20:12.179 "iops": 5019.822022586054, 00:20:12.179 "mibps": 19.608679775726774, 00:20:12.179 "io_failed": 0, 00:20:12.179 "io_timeout": 0, 00:20:12.179 "avg_latency_us": 25462.931791734973, 00:20:12.179 "min_latency_us": 6709.638095238095, 00:20:12.179 "max_latency_us": 29335.161904761906 00:20:12.179 } 00:20:12.179 ], 00:20:12.179 "core_count": 1 00:20:12.179 } 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 565059 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 565059 ']' 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 565059 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 565059 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 565059' 00:20:12.179 killing process with pid 565059 00:20:12.179 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 565059 00:20:12.180 Received shutdown signal, test time was about 10.000000 seconds 00:20:12.180 00:20:12.180 Latency(us) 00:20:12.180 [2024-10-14T14:45:16.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.180 [2024-10-14T14:45:16.814Z] =================================================================================================================== 00:20:12.180 [2024-10-14T14:45:16.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.180 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 565059 00:20:12.180 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 565030 00:20:12.180 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 565030 ']' 00:20:12.180 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 565030 00:20:12.180 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:12.180 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.180 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 565030 00:20:12.438 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:12.438 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:12.438 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 565030' 00:20:12.438 killing process with pid 565030 00:20:12.438 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 565030 00:20:12.438 16:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 565030 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=567295 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 567295 00:20:12.438 16:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 567295 ']' 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.438 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.438 [2024-10-14 16:45:17.071121] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:20:12.438 [2024-10-14 16:45:17.071172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.697 [2024-10-14 16:45:17.142824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.697 [2024-10-14 16:45:17.182672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.697 [2024-10-14 16:45:17.182706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.697 [2024-10-14 16:45:17.182713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.697 [2024-10-14 16:45:17.182719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.697 [2024-10-14 16:45:17.182723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
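As a consistency check on the 10-second TLSTESTn1 run reported above (queue depth 128, 4096-byte I/O), the columns of its result table agree with one another:

  5019.82 IOPS x 4096 B = 20,561,183 B/s = 19.61 MiB/s     (the reported throughput)
  128 outstanding I/Os / 5019.82 IOPS = 25.5 ms             (Little's law; the reported average latency is 25462.93 us)

The one-second runs later in this log satisfy the same identities to within ramp-up noise.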
00:20:12.697 [2024-10-14 16:45:17.183285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.697 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:12.697 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:12.697 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:12.697 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:12.697 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.697 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.697 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.5NrG24K1Hc 00:20:12.697 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5NrG24K1Hc 00:20:12.697 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:12.955 [2024-10-14 16:45:17.491246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.955 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:13.213 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:13.472 [2024-10-14 16:45:17.892302] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.472 [2024-10-14 16:45:17.892544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.472 16:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:13.472 malloc0 00:20:13.472 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:13.730 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc 00:20:13.988 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:14.247 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=567679 00:20:14.247 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:14.247 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.247 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 567679 /var/tmp/bdevperf.sock 00:20:14.247 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 567679 ']' 00:20:14.247 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.247 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.247 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.247 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.247 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.247 [2024-10-14 16:45:18.745530] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:20:14.247 [2024-10-14 16:45:18.745579] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567679 ] 00:20:14.247 [2024-10-14 16:45:18.797248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.247 [2024-10-14 16:45:18.840270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:14.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:14.506 16:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc 00:20:14.506 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:14.764 [2024-10-14 16:45:19.311015] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.764 nvme0n1 00:20:14.764 16:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.022 Running I/O for 1 seconds... 
00:20:15.957 5344.00 IOPS, 20.88 MiB/s 00:20:15.957 Latency(us) 00:20:15.957 [2024-10-14T14:45:20.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.957 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.957 Verification LBA range: start 0x0 length 0x2000 00:20:15.957 nvme0n1 : 1.01 5402.60 21.10 0.00 0.00 23533.64 5118.05 28086.86 00:20:15.957 [2024-10-14T14:45:20.591Z] =================================================================================================================== 00:20:15.957 [2024-10-14T14:45:20.591Z] Total : 5402.60 21.10 0.00 0.00 23533.64 5118.05 28086.86 00:20:15.957 { 00:20:15.957 "results": [ 00:20:15.957 { 00:20:15.957 "job": "nvme0n1", 00:20:15.957 "core_mask": "0x2", 00:20:15.957 "workload": "verify", 00:20:15.957 "status": "finished", 00:20:15.957 "verify_range": { 00:20:15.957 "start": 0, 00:20:15.957 "length": 8192 00:20:15.957 }, 00:20:15.957 "queue_depth": 128, 00:20:15.957 "io_size": 4096, 00:20:15.957 "runtime": 1.012845, 00:20:15.957 "iops": 5402.603557306399, 00:20:15.957 "mibps": 21.10392014572812, 00:20:15.957 "io_failed": 0, 00:20:15.957 "io_timeout": 0, 00:20:15.957 "avg_latency_us": 23533.64157059315, 00:20:15.957 "min_latency_us": 5118.049523809524, 00:20:15.957 "max_latency_us": 28086.85714285714 00:20:15.957 } 00:20:15.957 ], 00:20:15.957 "core_count": 1 00:20:15.957 } 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 567679 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 567679 ']' 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 567679 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 567679 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 567679' 00:20:15.957 killing process with pid 567679 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 567679 00:20:15.957 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.957 00:20:15.957 Latency(us) 00:20:15.957 [2024-10-14T14:45:20.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.957 [2024-10-14T14:45:20.591Z] =================================================================================================================== 00:20:15.957 [2024-10-14T14:45:20.591Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.957 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 567679 00:20:16.216 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 567295 00:20:16.216 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 567295 ']' 00:20:16.216 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 567295 00:20:16.216 16:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:16.216 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:16.216 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 567295 00:20:16.216 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:16.216 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:16.216 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 567295' 00:20:16.216 killing process with pid 567295 00:20:16.216 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 567295 00:20:16.216 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 567295 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=568016 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 568016 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 568016 ']' 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:16.474 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.474 [2024-10-14 16:45:20.971700] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:20:16.474 [2024-10-14 16:45:20.971744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.474 [2024-10-14 16:45:21.023775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.474 [2024-10-14 16:45:21.065805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.474 [2024-10-14 16:45:21.065836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:16.474 [2024-10-14 16:45:21.065843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.474 [2024-10-14 16:45:21.065849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.474 [2024-10-14 16:45:21.065854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.474 [2024-10-14 16:45:21.066400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.732 [2024-10-14 16:45:21.200779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.732 malloc0 00:20:16.732 [2024-10-14 16:45:21.228777] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.732 [2024-10-14 16:45:21.228978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=568043 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 568043 /var/tmp/bdevperf.sock 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 568043 ']' 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:16.732 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.732 [2024-10-14 16:45:21.301679] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
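The waitforlisten calls that appear throughout this log (here waiting on /var/tmp/bdevperf.sock, earlier on /var/tmp/spdk.sock) simply block until the freshly launched SPDK process answers on its RPC socket. The harness's implementation in autotest_common.sh is not shown in this log; a minimal stand-in with the same observable behavior, assuming the standard rpc_get_methods RPC and the socket path and max_retries=100 seen above, could look like this:

  # illustrative sketch only, not the autotest_common.sh implementation
  rpc_addr=/var/tmp/bdevperf.sock
  for ((i = 0; i < 100; i++)); do
      # succeeds once the process is up and serving RPCs on the socket
      scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
      sleep 0.5
  done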
00:20:16.732 [2024-10-14 16:45:21.301721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568043 ] 00:20:16.991 [2024-10-14 16:45:21.368779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.991 [2024-10-14 16:45:21.408807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.991 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.991 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:16.991 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5NrG24K1Hc 00:20:17.271 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:17.271 [2024-10-14 16:45:21.863886] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.537 nvme0n1 00:20:17.538 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:17.538 Running I/O for 1 seconds... 00:20:18.503 5475.00 IOPS, 21.39 MiB/s 00:20:18.503 Latency(us) 00:20:18.503 [2024-10-14T14:45:23.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.503 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:18.503 Verification LBA range: start 0x0 length 0x2000 00:20:18.503 nvme0n1 : 1.01 5527.99 21.59 0.00 0.00 22995.18 5149.26 23967.45 00:20:18.503 [2024-10-14T14:45:23.137Z] =================================================================================================================== 00:20:18.503 [2024-10-14T14:45:23.138Z] Total : 5527.99 21.59 0.00 0.00 22995.18 5149.26 23967.45 00:20:18.504 { 00:20:18.504 "results": [ 00:20:18.504 { 00:20:18.504 "job": "nvme0n1", 00:20:18.504 "core_mask": "0x2", 00:20:18.504 "workload": "verify", 00:20:18.504 "status": "finished", 00:20:18.504 "verify_range": { 00:20:18.504 "start": 0, 00:20:18.504 "length": 8192 00:20:18.504 }, 00:20:18.504 "queue_depth": 128, 00:20:18.504 "io_size": 4096, 00:20:18.504 "runtime": 1.01357, 00:20:18.504 "iops": 5527.985240289275, 00:20:18.504 "mibps": 21.59369234487998, 00:20:18.504 "io_failed": 0, 00:20:18.504 "io_timeout": 0, 00:20:18.504 "avg_latency_us": 22995.175077297026, 00:20:18.504 "min_latency_us": 5149.257142857143, 00:20:18.504 "max_latency_us": 23967.45142857143 00:20:18.504 } 00:20:18.504 ], 00:20:18.504 "core_count": 1 00:20:18.504 } 00:20:18.504 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:18.504 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.504 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.763 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.763 16:45:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:18.763 "subsystems": [ 00:20:18.763 { 00:20:18.763 "subsystem": "keyring", 00:20:18.763 "config": [ 00:20:18.763 { 00:20:18.763 "method": "keyring_file_add_key", 00:20:18.763 "params": { 00:20:18.763 "name": "key0", 00:20:18.763 "path": "/tmp/tmp.5NrG24K1Hc" 00:20:18.763 } 00:20:18.763 } 00:20:18.763 ] 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "subsystem": "iobuf", 00:20:18.763 "config": [ 00:20:18.763 { 00:20:18.763 "method": "iobuf_set_options", 00:20:18.763 "params": { 00:20:18.763 "small_pool_count": 8192, 00:20:18.763 "large_pool_count": 1024, 00:20:18.763 "small_bufsize": 8192, 00:20:18.763 "large_bufsize": 135168 00:20:18.763 } 00:20:18.763 } 00:20:18.763 ] 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "subsystem": "sock", 00:20:18.763 "config": [ 00:20:18.763 { 00:20:18.763 "method": "sock_set_default_impl", 00:20:18.763 "params": { 00:20:18.763 "impl_name": "posix" 00:20:18.763 } 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "method": "sock_impl_set_options", 00:20:18.763 "params": { 00:20:18.763 "impl_name": "ssl", 00:20:18.763 "recv_buf_size": 4096, 00:20:18.763 "send_buf_size": 4096, 00:20:18.763 "enable_recv_pipe": true, 00:20:18.763 "enable_quickack": false, 00:20:18.763 "enable_placement_id": 0, 00:20:18.763 "enable_zerocopy_send_server": true, 00:20:18.763 "enable_zerocopy_send_client": false, 00:20:18.763 "zerocopy_threshold": 0, 00:20:18.763 "tls_version": 0, 00:20:18.763 "enable_ktls": false 00:20:18.763 } 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "method": "sock_impl_set_options", 00:20:18.763 "params": { 00:20:18.763 "impl_name": "posix", 00:20:18.763 "recv_buf_size": 2097152, 00:20:18.763 "send_buf_size": 2097152, 00:20:18.763 "enable_recv_pipe": true, 00:20:18.763 "enable_quickack": false, 00:20:18.763 "enable_placement_id": 0, 00:20:18.763 "enable_zerocopy_send_server": true, 00:20:18.763 "enable_zerocopy_send_client": false, 00:20:18.763 "zerocopy_threshold": 0, 00:20:18.763 "tls_version": 0, 00:20:18.763 "enable_ktls": false 00:20:18.763 } 00:20:18.763 } 00:20:18.763 ] 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "subsystem": "vmd", 00:20:18.763 "config": [] 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "subsystem": "accel", 00:20:18.763 "config": [ 00:20:18.763 { 00:20:18.763 "method": "accel_set_options", 00:20:18.763 "params": { 00:20:18.763 "small_cache_size": 128, 00:20:18.763 "large_cache_size": 16, 00:20:18.763 "task_count": 2048, 00:20:18.763 "sequence_count": 2048, 00:20:18.763 "buf_count": 2048 00:20:18.763 } 00:20:18.763 } 00:20:18.763 ] 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "subsystem": "bdev", 00:20:18.763 "config": [ 00:20:18.763 { 00:20:18.763 "method": "bdev_set_options", 00:20:18.763 "params": { 00:20:18.763 "bdev_io_pool_size": 65535, 00:20:18.763 "bdev_io_cache_size": 256, 00:20:18.763 "bdev_auto_examine": true, 00:20:18.763 "iobuf_small_cache_size": 128, 00:20:18.763 "iobuf_large_cache_size": 16 00:20:18.763 } 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "method": "bdev_raid_set_options", 00:20:18.763 "params": { 00:20:18.763 "process_window_size_kb": 1024, 00:20:18.763 "process_max_bandwidth_mb_sec": 0 00:20:18.763 } 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "method": "bdev_iscsi_set_options", 00:20:18.763 "params": { 00:20:18.763 "timeout_sec": 30 00:20:18.763 } 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "method": "bdev_nvme_set_options", 00:20:18.763 "params": { 00:20:18.763 "action_on_timeout": "none", 00:20:18.763 "timeout_us": 0, 00:20:18.763 
"timeout_admin_us": 0, 00:20:18.763 "keep_alive_timeout_ms": 10000, 00:20:18.763 "arbitration_burst": 0, 00:20:18.763 "low_priority_weight": 0, 00:20:18.763 "medium_priority_weight": 0, 00:20:18.763 "high_priority_weight": 0, 00:20:18.763 "nvme_adminq_poll_period_us": 10000, 00:20:18.763 "nvme_ioq_poll_period_us": 0, 00:20:18.763 "io_queue_requests": 0, 00:20:18.763 "delay_cmd_submit": true, 00:20:18.763 "transport_retry_count": 4, 00:20:18.763 "bdev_retry_count": 3, 00:20:18.763 "transport_ack_timeout": 0, 00:20:18.763 "ctrlr_loss_timeout_sec": 0, 00:20:18.763 "reconnect_delay_sec": 0, 00:20:18.763 "fast_io_fail_timeout_sec": 0, 00:20:18.763 "disable_auto_failback": false, 00:20:18.763 "generate_uuids": false, 00:20:18.763 "transport_tos": 0, 00:20:18.763 "nvme_error_stat": false, 00:20:18.763 "rdma_srq_size": 0, 00:20:18.763 "io_path_stat": false, 00:20:18.763 "allow_accel_sequence": false, 00:20:18.763 "rdma_max_cq_size": 0, 00:20:18.763 "rdma_cm_event_timeout_ms": 0, 00:20:18.763 "dhchap_digests": [ 00:20:18.763 "sha256", 00:20:18.763 "sha384", 00:20:18.763 "sha512" 00:20:18.763 ], 00:20:18.763 "dhchap_dhgroups": [ 00:20:18.763 "null", 00:20:18.763 "ffdhe2048", 00:20:18.763 "ffdhe3072", 00:20:18.763 "ffdhe4096", 00:20:18.763 "ffdhe6144", 00:20:18.763 "ffdhe8192" 00:20:18.763 ] 00:20:18.763 } 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "method": "bdev_nvme_set_hotplug", 00:20:18.763 "params": { 00:20:18.763 "period_us": 100000, 00:20:18.763 "enable": false 00:20:18.763 } 00:20:18.763 }, 00:20:18.763 { 00:20:18.763 "method": "bdev_malloc_create", 00:20:18.763 "params": { 00:20:18.764 "name": "malloc0", 00:20:18.764 "num_blocks": 8192, 00:20:18.764 "block_size": 4096, 00:20:18.764 "physical_block_size": 4096, 00:20:18.764 "uuid": "44ac7734-5d24-4089-a7e0-37280b1845ae", 00:20:18.764 "optimal_io_boundary": 0, 00:20:18.764 "md_size": 0, 00:20:18.764 "dif_type": 0, 00:20:18.764 "dif_is_head_of_md": false, 00:20:18.764 "dif_pi_format": 0 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "bdev_wait_for_examine" 00:20:18.764 } 00:20:18.764 ] 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "subsystem": "nbd", 00:20:18.764 "config": [] 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "subsystem": "scheduler", 00:20:18.764 "config": [ 00:20:18.764 { 00:20:18.764 "method": "framework_set_scheduler", 00:20:18.764 "params": { 00:20:18.764 "name": "static" 00:20:18.764 } 00:20:18.764 } 00:20:18.764 ] 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "subsystem": "nvmf", 00:20:18.764 "config": [ 00:20:18.764 { 00:20:18.764 "method": "nvmf_set_config", 00:20:18.764 "params": { 00:20:18.764 "discovery_filter": "match_any", 00:20:18.764 "admin_cmd_passthru": { 00:20:18.764 "identify_ctrlr": false 00:20:18.764 }, 00:20:18.764 "dhchap_digests": [ 00:20:18.764 "sha256", 00:20:18.764 "sha384", 00:20:18.764 "sha512" 00:20:18.764 ], 00:20:18.764 "dhchap_dhgroups": [ 00:20:18.764 "null", 00:20:18.764 "ffdhe2048", 00:20:18.764 "ffdhe3072", 00:20:18.764 "ffdhe4096", 00:20:18.764 "ffdhe6144", 00:20:18.764 "ffdhe8192" 00:20:18.764 ] 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "nvmf_set_max_subsystems", 00:20:18.764 "params": { 00:20:18.764 "max_subsystems": 1024 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "nvmf_set_crdt", 00:20:18.764 "params": { 00:20:18.764 "crdt1": 0, 00:20:18.764 "crdt2": 0, 00:20:18.764 "crdt3": 0 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "nvmf_create_transport", 00:20:18.764 "params": { 00:20:18.764 "trtype": 
"TCP", 00:20:18.764 "max_queue_depth": 128, 00:20:18.764 "max_io_qpairs_per_ctrlr": 127, 00:20:18.764 "in_capsule_data_size": 4096, 00:20:18.764 "max_io_size": 131072, 00:20:18.764 "io_unit_size": 131072, 00:20:18.764 "max_aq_depth": 128, 00:20:18.764 "num_shared_buffers": 511, 00:20:18.764 "buf_cache_size": 4294967295, 00:20:18.764 "dif_insert_or_strip": false, 00:20:18.764 "zcopy": false, 00:20:18.764 "c2h_success": false, 00:20:18.764 "sock_priority": 0, 00:20:18.764 "abort_timeout_sec": 1, 00:20:18.764 "ack_timeout": 0, 00:20:18.764 "data_wr_pool_size": 0 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "nvmf_create_subsystem", 00:20:18.764 "params": { 00:20:18.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.764 "allow_any_host": false, 00:20:18.764 "serial_number": "00000000000000000000", 00:20:18.764 "model_number": "SPDK bdev Controller", 00:20:18.764 "max_namespaces": 32, 00:20:18.764 "min_cntlid": 1, 00:20:18.764 "max_cntlid": 65519, 00:20:18.764 "ana_reporting": false 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "nvmf_subsystem_add_host", 00:20:18.764 "params": { 00:20:18.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.764 "host": "nqn.2016-06.io.spdk:host1", 00:20:18.764 "psk": "key0" 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "nvmf_subsystem_add_ns", 00:20:18.764 "params": { 00:20:18.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.764 "namespace": { 00:20:18.764 "nsid": 1, 00:20:18.764 "bdev_name": "malloc0", 00:20:18.764 "nguid": "44AC77345D244089A7E037280B1845AE", 00:20:18.764 "uuid": "44ac7734-5d24-4089-a7e0-37280b1845ae", 00:20:18.764 "no_auto_visible": false 00:20:18.764 } 00:20:18.764 } 00:20:18.764 }, 00:20:18.764 { 00:20:18.764 "method": "nvmf_subsystem_add_listener", 00:20:18.764 "params": { 00:20:18.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.764 "listen_address": { 00:20:18.764 "trtype": "TCP", 00:20:18.764 "adrfam": "IPv4", 00:20:18.764 "traddr": "10.0.0.2", 00:20:18.764 "trsvcid": "4420" 00:20:18.764 }, 00:20:18.764 "secure_channel": false, 00:20:18.764 "sock_impl": "ssl" 00:20:18.764 } 00:20:18.764 } 00:20:18.764 ] 00:20:18.764 } 00:20:18.764 ] 00:20:18.764 }' 00:20:18.764 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:19.023 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:19.023 "subsystems": [ 00:20:19.023 { 00:20:19.023 "subsystem": "keyring", 00:20:19.023 "config": [ 00:20:19.023 { 00:20:19.023 "method": "keyring_file_add_key", 00:20:19.023 "params": { 00:20:19.023 "name": "key0", 00:20:19.023 "path": "/tmp/tmp.5NrG24K1Hc" 00:20:19.023 } 00:20:19.023 } 00:20:19.023 ] 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "subsystem": "iobuf", 00:20:19.023 "config": [ 00:20:19.023 { 00:20:19.023 "method": "iobuf_set_options", 00:20:19.023 "params": { 00:20:19.023 "small_pool_count": 8192, 00:20:19.023 "large_pool_count": 1024, 00:20:19.023 "small_bufsize": 8192, 00:20:19.023 "large_bufsize": 135168 00:20:19.023 } 00:20:19.023 } 00:20:19.023 ] 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "subsystem": "sock", 00:20:19.023 "config": [ 00:20:19.023 { 00:20:19.023 "method": "sock_set_default_impl", 00:20:19.023 "params": { 00:20:19.023 "impl_name": "posix" 00:20:19.023 } 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "method": "sock_impl_set_options", 00:20:19.023 "params": { 00:20:19.023 "impl_name": "ssl", 00:20:19.023 
"recv_buf_size": 4096, 00:20:19.023 "send_buf_size": 4096, 00:20:19.023 "enable_recv_pipe": true, 00:20:19.023 "enable_quickack": false, 00:20:19.023 "enable_placement_id": 0, 00:20:19.023 "enable_zerocopy_send_server": true, 00:20:19.023 "enable_zerocopy_send_client": false, 00:20:19.023 "zerocopy_threshold": 0, 00:20:19.023 "tls_version": 0, 00:20:19.023 "enable_ktls": false 00:20:19.023 } 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "method": "sock_impl_set_options", 00:20:19.023 "params": { 00:20:19.023 "impl_name": "posix", 00:20:19.023 "recv_buf_size": 2097152, 00:20:19.023 "send_buf_size": 2097152, 00:20:19.023 "enable_recv_pipe": true, 00:20:19.023 "enable_quickack": false, 00:20:19.023 "enable_placement_id": 0, 00:20:19.023 "enable_zerocopy_send_server": true, 00:20:19.023 "enable_zerocopy_send_client": false, 00:20:19.023 "zerocopy_threshold": 0, 00:20:19.023 "tls_version": 0, 00:20:19.023 "enable_ktls": false 00:20:19.023 } 00:20:19.023 } 00:20:19.023 ] 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "subsystem": "vmd", 00:20:19.023 "config": [] 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "subsystem": "accel", 00:20:19.023 "config": [ 00:20:19.023 { 00:20:19.023 "method": "accel_set_options", 00:20:19.023 "params": { 00:20:19.023 "small_cache_size": 128, 00:20:19.023 "large_cache_size": 16, 00:20:19.023 "task_count": 2048, 00:20:19.023 "sequence_count": 2048, 00:20:19.023 "buf_count": 2048 00:20:19.023 } 00:20:19.023 } 00:20:19.023 ] 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "subsystem": "bdev", 00:20:19.023 "config": [ 00:20:19.023 { 00:20:19.023 "method": "bdev_set_options", 00:20:19.023 "params": { 00:20:19.023 "bdev_io_pool_size": 65535, 00:20:19.023 "bdev_io_cache_size": 256, 00:20:19.023 "bdev_auto_examine": true, 00:20:19.023 "iobuf_small_cache_size": 128, 00:20:19.023 "iobuf_large_cache_size": 16 00:20:19.023 } 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "method": "bdev_raid_set_options", 00:20:19.023 "params": { 00:20:19.023 "process_window_size_kb": 1024, 00:20:19.023 "process_max_bandwidth_mb_sec": 0 00:20:19.023 } 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "method": "bdev_iscsi_set_options", 00:20:19.023 "params": { 00:20:19.023 "timeout_sec": 30 00:20:19.023 } 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "method": "bdev_nvme_set_options", 00:20:19.023 "params": { 00:20:19.023 "action_on_timeout": "none", 00:20:19.023 "timeout_us": 0, 00:20:19.023 "timeout_admin_us": 0, 00:20:19.023 "keep_alive_timeout_ms": 10000, 00:20:19.023 "arbitration_burst": 0, 00:20:19.023 "low_priority_weight": 0, 00:20:19.023 "medium_priority_weight": 0, 00:20:19.023 "high_priority_weight": 0, 00:20:19.023 "nvme_adminq_poll_period_us": 10000, 00:20:19.023 "nvme_ioq_poll_period_us": 0, 00:20:19.023 "io_queue_requests": 512, 00:20:19.023 "delay_cmd_submit": true, 00:20:19.023 "transport_retry_count": 4, 00:20:19.023 "bdev_retry_count": 3, 00:20:19.023 "transport_ack_timeout": 0, 00:20:19.023 "ctrlr_loss_timeout_sec": 0, 00:20:19.023 "reconnect_delay_sec": 0, 00:20:19.023 "fast_io_fail_timeout_sec": 0, 00:20:19.023 "disable_auto_failback": false, 00:20:19.023 "generate_uuids": false, 00:20:19.023 "transport_tos": 0, 00:20:19.023 "nvme_error_stat": false, 00:20:19.023 "rdma_srq_size": 0, 00:20:19.023 "io_path_stat": false, 00:20:19.023 "allow_accel_sequence": false, 00:20:19.023 "rdma_max_cq_size": 0, 00:20:19.023 "rdma_cm_event_timeout_ms": 0, 00:20:19.023 "dhchap_digests": [ 00:20:19.023 "sha256", 00:20:19.023 "sha384", 00:20:19.023 "sha512" 00:20:19.023 ], 00:20:19.023 "dhchap_dhgroups": [ 
00:20:19.023 "null", 00:20:19.023 "ffdhe2048", 00:20:19.023 "ffdhe3072", 00:20:19.023 "ffdhe4096", 00:20:19.023 "ffdhe6144", 00:20:19.023 "ffdhe8192" 00:20:19.023 ] 00:20:19.023 } 00:20:19.023 }, 00:20:19.023 { 00:20:19.023 "method": "bdev_nvme_attach_controller", 00:20:19.023 "params": { 00:20:19.023 "name": "nvme0", 00:20:19.023 "trtype": "TCP", 00:20:19.023 "adrfam": "IPv4", 00:20:19.023 "traddr": "10.0.0.2", 00:20:19.023 "trsvcid": "4420", 00:20:19.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.024 "prchk_reftag": false, 00:20:19.024 "prchk_guard": false, 00:20:19.024 "ctrlr_loss_timeout_sec": 0, 00:20:19.024 "reconnect_delay_sec": 0, 00:20:19.024 "fast_io_fail_timeout_sec": 0, 00:20:19.024 "psk": "key0", 00:20:19.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.024 "hdgst": false, 00:20:19.024 "ddgst": false, 00:20:19.024 "multipath": "multipath" 00:20:19.024 } 00:20:19.024 }, 00:20:19.024 { 00:20:19.024 "method": "bdev_nvme_set_hotplug", 00:20:19.024 "params": { 00:20:19.024 "period_us": 100000, 00:20:19.024 "enable": false 00:20:19.024 } 00:20:19.024 }, 00:20:19.024 { 00:20:19.024 "method": "bdev_enable_histogram", 00:20:19.024 "params": { 00:20:19.024 "name": "nvme0n1", 00:20:19.024 "enable": true 00:20:19.024 } 00:20:19.024 }, 00:20:19.024 { 00:20:19.024 "method": "bdev_wait_for_examine" 00:20:19.024 } 00:20:19.024 ] 00:20:19.024 }, 00:20:19.024 { 00:20:19.024 "subsystem": "nbd", 00:20:19.024 "config": [] 00:20:19.024 } 00:20:19.024 ] 00:20:19.024 }' 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 568043 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 568043 ']' 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 568043 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 568043 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 568043' 00:20:19.024 killing process with pid 568043 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 568043 00:20:19.024 Received shutdown signal, test time was about 1.000000 seconds 00:20:19.024 00:20:19.024 Latency(us) 00:20:19.024 [2024-10-14T14:45:23.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.024 [2024-10-14T14:45:23.658Z] =================================================================================================================== 00:20:19.024 [2024-10-14T14:45:23.658Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 568043 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 568016 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 568016 ']' 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 568016 00:20:19.024 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 568016 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 568016' 00:20:19.283 killing process with pid 568016 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 568016 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 568016 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:19.283 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:19.283 "subsystems": [ 00:20:19.283 { 00:20:19.283 "subsystem": "keyring", 00:20:19.283 "config": [ 00:20:19.283 { 00:20:19.283 "method": "keyring_file_add_key", 00:20:19.283 "params": { 00:20:19.283 "name": "key0", 00:20:19.283 "path": "/tmp/tmp.5NrG24K1Hc" 00:20:19.283 } 00:20:19.283 } 00:20:19.283 ] 00:20:19.283 }, 00:20:19.283 { 00:20:19.283 "subsystem": "iobuf", 00:20:19.283 "config": [ 00:20:19.283 { 00:20:19.283 "method": "iobuf_set_options", 00:20:19.283 "params": { 00:20:19.283 "small_pool_count": 8192, 00:20:19.283 "large_pool_count": 1024, 00:20:19.283 "small_bufsize": 8192, 00:20:19.283 "large_bufsize": 135168 00:20:19.283 } 00:20:19.283 } 00:20:19.283 ] 00:20:19.283 }, 00:20:19.283 { 00:20:19.283 "subsystem": "sock", 00:20:19.283 "config": [ 00:20:19.283 { 00:20:19.283 "method": "sock_set_default_impl", 00:20:19.283 "params": { 00:20:19.283 "impl_name": "posix" 00:20:19.283 } 00:20:19.283 }, 00:20:19.283 { 00:20:19.283 "method": "sock_impl_set_options", 00:20:19.283 "params": { 00:20:19.283 "impl_name": "ssl", 00:20:19.283 "recv_buf_size": 4096, 00:20:19.283 "send_buf_size": 4096, 00:20:19.283 "enable_recv_pipe": true, 00:20:19.283 "enable_quickack": false, 00:20:19.283 "enable_placement_id": 0, 00:20:19.283 "enable_zerocopy_send_server": true, 00:20:19.283 "enable_zerocopy_send_client": false, 00:20:19.283 "zerocopy_threshold": 0, 00:20:19.283 "tls_version": 0, 00:20:19.283 "enable_ktls": false 00:20:19.283 } 00:20:19.283 }, 00:20:19.283 { 00:20:19.283 "method": "sock_impl_set_options", 00:20:19.283 "params": { 00:20:19.283 "impl_name": "posix", 00:20:19.283 "recv_buf_size": 2097152, 00:20:19.283 "send_buf_size": 2097152, 00:20:19.283 "enable_recv_pipe": true, 00:20:19.283 "enable_quickack": false, 00:20:19.283 "enable_placement_id": 0, 00:20:19.283 "enable_zerocopy_send_server": true, 00:20:19.283 "enable_zerocopy_send_client": false, 00:20:19.284 "zerocopy_threshold": 0, 00:20:19.284 "tls_version": 0, 00:20:19.284 "enable_ktls": false 00:20:19.284 } 00:20:19.284 } 00:20:19.284 ] 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "subsystem": 
"vmd", 00:20:19.284 "config": [] 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "subsystem": "accel", 00:20:19.284 "config": [ 00:20:19.284 { 00:20:19.284 "method": "accel_set_options", 00:20:19.284 "params": { 00:20:19.284 "small_cache_size": 128, 00:20:19.284 "large_cache_size": 16, 00:20:19.284 "task_count": 2048, 00:20:19.284 "sequence_count": 2048, 00:20:19.284 "buf_count": 2048 00:20:19.284 } 00:20:19.284 } 00:20:19.284 ] 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "subsystem": "bdev", 00:20:19.284 "config": [ 00:20:19.284 { 00:20:19.284 "method": "bdev_set_options", 00:20:19.284 "params": { 00:20:19.284 "bdev_io_pool_size": 65535, 00:20:19.284 "bdev_io_cache_size": 256, 00:20:19.284 "bdev_auto_examine": true, 00:20:19.284 "iobuf_small_cache_size": 128, 00:20:19.284 "iobuf_large_cache_size": 16 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "bdev_raid_set_options", 00:20:19.284 "params": { 00:20:19.284 "process_window_size_kb": 1024, 00:20:19.284 "process_max_bandwidth_mb_sec": 0 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "bdev_iscsi_set_options", 00:20:19.284 "params": { 00:20:19.284 "timeout_sec": 30 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "bdev_nvme_set_options", 00:20:19.284 "params": { 00:20:19.284 "action_on_timeout": "none", 00:20:19.284 "timeout_us": 0, 00:20:19.284 "timeout_admin_us": 0, 00:20:19.284 "keep_alive_timeout_ms": 10000, 00:20:19.284 "arbitration_burst": 0, 00:20:19.284 "low_priority_weight": 0, 00:20:19.284 "medium_priority_weight": 0, 00:20:19.284 "high_priority_weight": 0, 00:20:19.284 "nvme_adminq_poll_period_us": 10000, 00:20:19.284 "nvme_ioq_poll_period_us": 0, 00:20:19.284 "io_queue_requests": 0, 00:20:19.284 "delay_cmd_submit": true, 00:20:19.284 "transport_retry_count": 4, 00:20:19.284 "bdev_retry_count": 3, 00:20:19.284 "transport_ack_timeout": 0, 00:20:19.284 "ctrlr_loss_timeout_sec": 0, 00:20:19.284 "reconnect_delay_sec": 0, 00:20:19.284 "fast_io_fail_timeout_sec": 0, 00:20:19.284 "disable_auto_failback": false, 00:20:19.284 "generate_uuids": false, 00:20:19.284 "transport_tos": 0, 00:20:19.284 "nvme_error_stat": false, 00:20:19.284 "rdma_srq_size": 0, 00:20:19.284 "io_path_stat": false, 00:20:19.284 "allow_accel_sequence": false, 00:20:19.284 "rdma_max_cq_size": 0, 00:20:19.284 "rdma_cm_event_timeout_ms": 0, 00:20:19.284 "dhchap_digests": [ 00:20:19.284 "sha256", 00:20:19.284 "sha384", 00:20:19.284 "sha512" 00:20:19.284 ], 00:20:19.284 "dhchap_dhgroups": [ 00:20:19.284 "null", 00:20:19.284 "ffdhe2048", 00:20:19.284 "ffdhe3072", 00:20:19.284 "ffdhe4096", 00:20:19.284 "ffdhe6144", 00:20:19.284 "ffdhe8192" 00:20:19.284 ] 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "bdev_nvme_set_hotplug", 00:20:19.284 "params": { 00:20:19.284 "period_us": 100000, 00:20:19.284 "enable": false 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "bdev_malloc_create", 00:20:19.284 "params": { 00:20:19.284 "name": "malloc0", 00:20:19.284 "num_blocks": 8192, 00:20:19.284 "block_size": 4096, 00:20:19.284 "physical_block_size": 4096, 00:20:19.284 "uuid": "44ac7734-5d24-4089-a7e0-37280b1845ae", 00:20:19.284 "optimal_io_boundary": 0, 00:20:19.284 "md_size": 0, 00:20:19.284 "dif_type": 0, 00:20:19.284 "dif_is_head_of_md": false, 00:20:19.284 "dif_pi_format": 0 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "bdev_wait_for_examine" 00:20:19.284 } 00:20:19.284 ] 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "subsystem": "nbd", 00:20:19.284 "config": 
[] 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "subsystem": "scheduler", 00:20:19.284 "config": [ 00:20:19.284 { 00:20:19.284 "method": "framework_set_scheduler", 00:20:19.284 "params": { 00:20:19.284 "name": "static" 00:20:19.284 } 00:20:19.284 } 00:20:19.284 ] 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "subsystem": "nvmf", 00:20:19.284 "config": [ 00:20:19.284 { 00:20:19.284 "method": "nvmf_set_config", 00:20:19.284 "params": { 00:20:19.284 "discovery_filter": "match_any", 00:20:19.284 "admin_cmd_passthru": { 00:20:19.284 "identify_ctrlr": false 00:20:19.284 }, 00:20:19.284 "dhchap_digests": [ 00:20:19.284 "sha256", 00:20:19.284 "sha384", 00:20:19.284 "sha512" 00:20:19.284 ], 00:20:19.284 "dhchap_dhgroups": [ 00:20:19.284 "null", 00:20:19.284 "ffdhe2048", 00:20:19.284 "ffdhe3072", 00:20:19.284 "ffdhe4096", 00:20:19.284 "ffdhe6144", 00:20:19.284 "ffdhe8192" 00:20:19.284 ] 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "nvmf_set_max_subsystems", 00:20:19.284 "params": { 00:20:19.284 "max_subsystems": 1024 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "nvmf_set_crdt", 00:20:19.284 "params": { 00:20:19.284 "crdt1": 0, 00:20:19.284 "crdt2": 0, 00:20:19.284 "crdt3": 0 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "nvmf_create_transport", 00:20:19.284 "params": { 00:20:19.284 "trtype": "TCP", 00:20:19.284 "max_queue_depth": 128, 00:20:19.284 "max_io_qpairs_per_ctrlr": 127, 00:20:19.284 "in_capsule_data_size": 4096, 00:20:19.284 "max_io_size": 131072, 00:20:19.284 "io_unit_size": 131072, 00:20:19.284 "max_aq_depth": 128, 00:20:19.284 "num_shared_buffers": 511, 00:20:19.284 "buf_cache_size": 4294967295, 00:20:19.284 "dif_insert_or_strip": false, 00:20:19.284 "zcopy": false, 00:20:19.284 "c2h_success": false, 00:20:19.284 "sock_priority": 0, 00:20:19.284 "abort_timeout_sec": 1, 00:20:19.284 "ack_timeout": 0, 00:20:19.284 "data_wr_pool_size": 0 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "nvmf_create_subsystem", 00:20:19.284 "params": { 00:20:19.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.284 "allow_any_host": false, 00:20:19.284 "serial_number": "00000000000000000000", 00:20:19.284 "model_number": "SPDK bdev Controller", 00:20:19.284 "max_namespaces": 32, 00:20:19.284 "min_cntlid": 1, 00:20:19.284 "max_cntlid": 65519, 00:20:19.284 "ana_reporting": false 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "nvmf_subsystem_add_host", 00:20:19.284 "params": { 00:20:19.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.284 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.284 "psk": "key0" 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "nvmf_subsystem_add_ns", 00:20:19.284 "params": { 00:20:19.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.284 "namespace": { 00:20:19.284 "nsid": 1, 00:20:19.284 "bdev_name": "malloc0", 00:20:19.284 "nguid": "44AC77345D244089A7E037280B1845AE", 00:20:19.284 "uuid": "44ac7734-5d24-4089-a7e0-37280b1845ae", 00:20:19.284 "no_auto_visible": false 00:20:19.284 } 00:20:19.284 } 00:20:19.284 }, 00:20:19.284 { 00:20:19.284 "method": "nvmf_subsystem_add_listener", 00:20:19.284 "params": { 00:20:19.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.284 "listen_address": { 00:20:19.284 "trtype": "TCP", 00:20:19.284 "adrfam": "IPv4", 00:20:19.284 "traddr": "10.0.0.2", 00:20:19.284 "trsvcid": "4420" 00:20:19.284 }, 00:20:19.284 "secure_channel": false, 00:20:19.284 "sock_impl": "ssl" 00:20:19.284 } 00:20:19.284 } 00:20:19.284 ] 00:20:19.284 } 
00:20:19.284 ] 00:20:19.284 }' 00:20:19.284 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.284 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=568517 00:20:19.284 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:19.284 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 568517 00:20:19.284 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 568517 ']' 00:20:19.284 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.284 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:19.284 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.285 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:19.285 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.544 [2024-10-14 16:45:23.934384] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:20:19.544 [2024-10-14 16:45:23.934432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.544 [2024-10-14 16:45:24.005295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.544 [2024-10-14 16:45:24.040394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.544 [2024-10-14 16:45:24.040428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.544 [2024-10-14 16:45:24.040435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.544 [2024-10-14 16:45:24.040442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.544 [2024-10-14 16:45:24.040447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:19.544 [2024-10-14 16:45:24.041056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.803 [2024-10-14 16:45:24.253415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.803 [2024-10-14 16:45:24.285443] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.803 [2024-10-14 16:45:24.285656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=568763 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 568763 /var/tmp/bdevperf.sock 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 568763 ']' 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:20.372 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:20.372 "subsystems": [ 00:20:20.372 { 00:20:20.372 "subsystem": "keyring", 00:20:20.372 "config": [ 00:20:20.372 { 00:20:20.372 "method": "keyring_file_add_key", 00:20:20.372 "params": { 00:20:20.372 "name": "key0", 00:20:20.372 "path": "/tmp/tmp.5NrG24K1Hc" 00:20:20.372 } 00:20:20.372 } 00:20:20.372 ] 00:20:20.372 }, 00:20:20.372 { 00:20:20.372 "subsystem": "iobuf", 00:20:20.372 "config": [ 00:20:20.372 { 00:20:20.372 "method": "iobuf_set_options", 00:20:20.372 "params": { 00:20:20.372 "small_pool_count": 8192, 00:20:20.372 "large_pool_count": 1024, 00:20:20.372 "small_bufsize": 8192, 00:20:20.372 "large_bufsize": 135168 00:20:20.372 } 00:20:20.372 } 00:20:20.372 ] 00:20:20.372 }, 00:20:20.372 { 00:20:20.372 "subsystem": "sock", 00:20:20.372 "config": [ 00:20:20.372 { 00:20:20.372 "method": "sock_set_default_impl", 00:20:20.372 "params": { 00:20:20.372 "impl_name": "posix" 00:20:20.372 } 00:20:20.372 }, 00:20:20.372 { 00:20:20.372 "method": "sock_impl_set_options", 00:20:20.372 "params": { 00:20:20.372 "impl_name": "ssl", 00:20:20.372 "recv_buf_size": 4096, 00:20:20.372 "send_buf_size": 4096, 00:20:20.372 "enable_recv_pipe": true, 00:20:20.372 "enable_quickack": false, 00:20:20.372 "enable_placement_id": 0, 00:20:20.373 "enable_zerocopy_send_server": true, 00:20:20.373 "enable_zerocopy_send_client": false, 00:20:20.373 "zerocopy_threshold": 0, 00:20:20.373 "tls_version": 0, 00:20:20.373 "enable_ktls": false 00:20:20.373 } 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "method": "sock_impl_set_options", 00:20:20.373 "params": { 00:20:20.373 "impl_name": "posix", 00:20:20.373 "recv_buf_size": 2097152, 00:20:20.373 "send_buf_size": 2097152, 00:20:20.373 "enable_recv_pipe": true, 00:20:20.373 "enable_quickack": false, 00:20:20.373 "enable_placement_id": 0, 00:20:20.373 "enable_zerocopy_send_server": true, 00:20:20.373 "enable_zerocopy_send_client": false, 00:20:20.373 "zerocopy_threshold": 0, 00:20:20.373 "tls_version": 0, 00:20:20.373 "enable_ktls": false 00:20:20.373 } 00:20:20.373 } 00:20:20.373 ] 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "subsystem": "vmd", 00:20:20.373 "config": [] 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "subsystem": "accel", 00:20:20.373 "config": [ 00:20:20.373 { 00:20:20.373 "method": "accel_set_options", 00:20:20.373 "params": { 00:20:20.373 "small_cache_size": 128, 00:20:20.373 "large_cache_size": 16, 00:20:20.373 "task_count": 2048, 00:20:20.373 "sequence_count": 2048, 00:20:20.373 "buf_count": 2048 00:20:20.373 } 00:20:20.373 } 00:20:20.373 ] 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "subsystem": "bdev", 00:20:20.373 "config": [ 00:20:20.373 { 00:20:20.373 "method": "bdev_set_options", 00:20:20.373 "params": { 00:20:20.373 "bdev_io_pool_size": 65535, 00:20:20.373 "bdev_io_cache_size": 256, 00:20:20.373 "bdev_auto_examine": true, 00:20:20.373 "iobuf_small_cache_size": 128, 00:20:20.373 "iobuf_large_cache_size": 16 00:20:20.373 } 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "method": "bdev_raid_set_options", 00:20:20.373 "params": { 00:20:20.373 "process_window_size_kb": 1024, 00:20:20.373 "process_max_bandwidth_mb_sec": 0 00:20:20.373 } 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "method": "bdev_iscsi_set_options", 00:20:20.373 "params": { 00:20:20.373 "timeout_sec": 30 00:20:20.373 } 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "method": "bdev_nvme_set_options", 00:20:20.373 "params": { 00:20:20.373 "action_on_timeout": "none", 00:20:20.373 "timeout_us": 0, 
00:20:20.373 "timeout_admin_us": 0, 00:20:20.373 "keep_alive_timeout_ms": 10000, 00:20:20.373 "arbitration_burst": 0, 00:20:20.373 "low_priority_weight": 0, 00:20:20.373 "medium_priority_weight": 0, 00:20:20.373 "high_priority_weight": 0, 00:20:20.373 "nvme_adminq_poll_period_us": 10000, 00:20:20.373 "nvme_ioq_poll_period_us": 0, 00:20:20.373 "io_queue_requests": 512, 00:20:20.373 "delay_cmd_submit": true, 00:20:20.373 "transport_retry_count": 4, 00:20:20.373 "bdev_retry_count": 3, 00:20:20.373 "transport_ack_timeout": 0, 00:20:20.373 "ctrlr_loss_timeout_sec": 0, 00:20:20.373 "reconnect_delay_sec": 0, 00:20:20.373 "fast_io_fail_timeout_sec": 0, 00:20:20.373 "disable_auto_failback": false, 00:20:20.373 "generate_uuids": false, 00:20:20.373 "transport_tos": 0, 00:20:20.373 "nvme_error_stat": false, 00:20:20.373 "rdma_srq_size": 0, 00:20:20.373 "io_path_stat": false, 00:20:20.373 "allow_accel_sequence": false, 00:20:20.373 "rdma_max_cq_size": 0, 00:20:20.373 "rdma_cm_event_timeout_ms": 0, 00:20:20.373 "dhchap_digests": [ 00:20:20.373 "sha256", 00:20:20.373 "sha384", 00:20:20.373 "sha512" 00:20:20.373 ], 00:20:20.373 "dhchap_dhgroups": [ 00:20:20.373 "null", 00:20:20.373 "ffdhe2048", 00:20:20.373 "ffdhe3072", 00:20:20.373 "ffdhe4096", 00:20:20.373 "ffdhe6144", 00:20:20.373 "ffdhe8192" 00:20:20.373 ] 00:20:20.373 } 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "method": "bdev_nvme_attach_controller", 00:20:20.373 "params": { 00:20:20.373 "name": "nvme0", 00:20:20.373 "trtype": "TCP", 00:20:20.373 "adrfam": "IPv4", 00:20:20.373 "traddr": "10.0.0.2", 00:20:20.373 "trsvcid": "4420", 00:20:20.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.373 "prchk_reftag": false, 00:20:20.373 "prchk_guard": false, 00:20:20.373 "ctrlr_loss_timeout_sec": 0, 00:20:20.373 "reconnect_delay_sec": 0, 00:20:20.373 "fast_io_fail_timeout_sec": 0, 00:20:20.373 "psk": "key0", 00:20:20.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.373 "hdgst": false, 00:20:20.373 "ddgst": false, 00:20:20.373 "multipath": "multipath" 00:20:20.373 } 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "method": "bdev_nvme_set_hotplug", 00:20:20.373 "params": { 00:20:20.373 "period_us": 100000, 00:20:20.373 "enable": false 00:20:20.373 } 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "method": "bdev_enable_histogram", 00:20:20.373 "params": { 00:20:20.373 "name": "nvme0n1", 00:20:20.373 "enable": true 00:20:20.373 } 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "method": "bdev_wait_for_examine" 00:20:20.373 } 00:20:20.373 ] 00:20:20.373 }, 00:20:20.373 { 00:20:20.373 "subsystem": "nbd", 00:20:20.373 "config": [] 00:20:20.373 } 00:20:20.373 ] 00:20:20.373 }' 00:20:20.373 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.373 16:45:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.373 [2024-10-14 16:45:24.838338] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:20:20.373 [2024-10-14 16:45:24.838382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568763 ] 00:20:20.373 [2024-10-14 16:45:24.887795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.373 [2024-10-14 16:45:24.930918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.645 [2024-10-14 16:45:25.083521] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.213 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.213 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:21.213 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:21.213 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:21.471 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.471 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:21.471 Running I/O for 1 seconds... 00:20:22.408 5503.00 IOPS, 21.50 MiB/s 00:20:22.408 Latency(us) 00:20:22.408 [2024-10-14T14:45:27.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.408 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:22.408 Verification LBA range: start 0x0 length 0x2000 00:20:22.408 nvme0n1 : 1.02 5545.82 21.66 0.00 0.00 22905.30 5024.43 24217.11 00:20:22.408 [2024-10-14T14:45:27.042Z] =================================================================================================================== 00:20:22.408 [2024-10-14T14:45:27.042Z] Total : 5545.82 21.66 0.00 0.00 22905.30 5024.43 24217.11 00:20:22.408 { 00:20:22.408 "results": [ 00:20:22.408 { 00:20:22.408 "job": "nvme0n1", 00:20:22.408 "core_mask": "0x2", 00:20:22.408 "workload": "verify", 00:20:22.408 "status": "finished", 00:20:22.408 "verify_range": { 00:20:22.408 "start": 0, 00:20:22.408 "length": 8192 00:20:22.408 }, 00:20:22.408 "queue_depth": 128, 00:20:22.408 "io_size": 4096, 00:20:22.408 "runtime": 1.01536, 00:20:22.408 "iops": 5545.816262212417, 00:20:22.408 "mibps": 21.663344774267255, 00:20:22.408 "io_failed": 0, 00:20:22.408 "io_timeout": 0, 00:20:22.408 "avg_latency_us": 22905.30335641982, 00:20:22.408 "min_latency_us": 5024.426666666666, 00:20:22.408 "max_latency_us": 24217.11238095238 00:20:22.408 } 00:20:22.408 ], 00:20:22.408 "core_count": 1 00:20:22.408 } 00:20:22.408 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:22.408 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:22.408 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:22.408 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:22.408 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:22.408 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = 
--pid ']' 00:20:22.408 16:45:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:22.408 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:22.408 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:22.408 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:22.408 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:22.408 nvmf_trace.0 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 568763 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 568763 ']' 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 568763 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 568763 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 568763' 00:20:22.667 killing process with pid 568763 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 568763 00:20:22.667 Received shutdown signal, test time was about 1.000000 seconds 00:20:22.667 00:20:22.667 Latency(us) 00:20:22.667 [2024-10-14T14:45:27.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.667 [2024-10-14T14:45:27.301Z] =================================================================================================================== 00:20:22.667 [2024-10-14T14:45:27.301Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 568763 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:22.667 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:22.926 rmmod nvme_tcp 00:20:22.926 rmmod nvme_fabrics 00:20:22.926 rmmod nvme_keyring 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:22.926 16:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 568517 ']' 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 568517 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 568517 ']' 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 568517 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 568517 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 568517' 00:20:22.926 killing process with pid 568517 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 568517 00:20:22.926 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 568517 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.185 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.117 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:25.117 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.FsxdiK5evm /tmp/tmp.fCvrWKHn6S /tmp/tmp.5NrG24K1Hc 00:20:25.117 00:20:25.117 real 1m19.157s 00:20:25.117 user 2m0.122s 00:20:25.117 sys 0m31.483s 00:20:25.117 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:25.117 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.117 ************************************ 00:20:25.117 END TEST nvmf_tls 00:20:25.117 
************************************ 00:20:25.117 16:45:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:25.117 16:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:25.117 16:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:25.117 16:45:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:25.117 ************************************ 00:20:25.117 START TEST nvmf_fips 00:20:25.117 ************************************ 00:20:25.118 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:25.377 * Looking for test storage... 00:20:25.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.377 --rc genhtml_branch_coverage=1 00:20:25.377 --rc genhtml_function_coverage=1 00:20:25.377 --rc genhtml_legend=1 00:20:25.377 --rc geninfo_all_blocks=1 00:20:25.377 --rc geninfo_unexecuted_blocks=1 00:20:25.377 00:20:25.377 ' 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.377 --rc genhtml_branch_coverage=1 00:20:25.377 --rc genhtml_function_coverage=1 00:20:25.377 --rc genhtml_legend=1 00:20:25.377 --rc geninfo_all_blocks=1 00:20:25.377 --rc geninfo_unexecuted_blocks=1 00:20:25.377 00:20:25.377 ' 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.377 --rc genhtml_branch_coverage=1 00:20:25.377 --rc genhtml_function_coverage=1 00:20:25.377 --rc genhtml_legend=1 00:20:25.377 --rc geninfo_all_blocks=1 00:20:25.377 --rc geninfo_unexecuted_blocks=1 00:20:25.377 00:20:25.377 ' 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:25.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.377 --rc genhtml_branch_coverage=1 00:20:25.377 --rc genhtml_function_coverage=1 00:20:25.377 --rc genhtml_legend=1 00:20:25.377 --rc geninfo_all_blocks=1 00:20:25.377 --rc geninfo_unexecuted_blocks=1 00:20:25.377 00:20:25.377 ' 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.377 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:25.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:25.378 16:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:25.378 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:25.378 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:25.378 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:25.378 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:25.378 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:25.378 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:25.637 Error setting digest 00:20:25.637 40D22A7DF27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:25.637 40D22A7DF27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:25.637 
16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.637 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.205 16:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:32.205 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:32.205 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:32.205 16:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:32.205 Found net devices under 0000:86:00.0: cvl_0_0 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.205 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:32.206 Found net devices under 0000:86:00.1: cvl_0_1 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:32.206 16:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:32.206 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:32.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:20:32.206 00:20:32.206 --- 10.0.0.2 ping statistics --- 00:20:32.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.206 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:32.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:20:32.206 00:20:32.206 --- 10.0.0.1 ping statistics --- 00:20:32.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.206 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=572778 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 572778 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 572778 ']' 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:32.206 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:32.206 [2024-10-14 16:45:36.156687] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
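For reference, the nvmf_tcp_init sequence captured above reduces to the following steps. This is a condensed sketch using the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses from this particular run, not a general recipe:
# move one E810 port into a private namespace to act as the target side
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1 on cvl_0_1, target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic on the default port and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
With that in place, the target application is started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), as seen in the nvmfappstart call above.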
00:20:32.206 [2024-10-14 16:45:36.156739] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.206 [2024-10-14 16:45:36.229850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.206 [2024-10-14 16:45:36.270467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.206 [2024-10-14 16:45:36.270502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.206 [2024-10-14 16:45:36.270509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.206 [2024-10-14 16:45:36.270514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.206 [2024-10-14 16:45:36.270519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.206 [2024-10-14 16:45:36.271082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.464 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:32.464 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:32.464 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:32.464 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:32.464 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:32.464 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.464 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:32.464 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:32.464 16:45:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:32.464 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.nTD 00:20:32.465 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:32.465 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.nTD 00:20:32.465 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.nTD 00:20:32.465 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.nTD 00:20:32.465 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:32.723 [2024-10-14 16:45:37.179494] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.723 [2024-10-14 16:45:37.195496] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:32.723 [2024-10-14 16:45:37.195687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.723 malloc0 00:20:32.723 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:32.723 16:45:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.723 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=572914 00:20:32.723 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 572914 /var/tmp/bdevperf.sock 00:20:32.723 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 572914 ']' 00:20:32.723 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.723 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:32.723 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.723 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:32.723 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:32.723 [2024-10-14 16:45:37.308961] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:20:32.723 [2024-10-14 16:45:37.309008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid572914 ] 00:20:32.723 [2024-10-14 16:45:37.358285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.981 [2024-10-14 16:45:37.400235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.981 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:32.981 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:32.981 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.nTD 00:20:33.240 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:33.240 [2024-10-14 16:45:37.826052] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.498 TLSTESTn1 00:20:33.498 16:45:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:33.498 Running I/O for 10 seconds... 
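The TLS portion of the FIPS test above can be read as the following sequence; this is a sketch using the key, socket, and NQNs from this run, with paths shortened to be relative to the SPDK tree:
# write the TLS PSK in NVMe interchange format and restrict its permissions
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"
# start bdevperf with its own RPC socket, idle (-z) until it is configured
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# register the key with bdevperf and attach over TLS to 10.0.0.2:4420
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# drive I/O through the attached TLSTESTn1 bdev for 10 seconds
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
The target side is configured with the same key for nqn.2016-06.io.spdk:host1 by setup_nvmf_tgt_conf; its individual RPC calls are not expanded in this log, so they are not reproduced here.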
00:20:35.369 5303.00 IOPS, 20.71 MiB/s [2024-10-14T14:45:41.380Z] 5395.50 IOPS, 21.08 MiB/s [2024-10-14T14:45:42.315Z] 5460.67 IOPS, 21.33 MiB/s [2024-10-14T14:45:43.249Z] 5513.75 IOPS, 21.54 MiB/s [2024-10-14T14:45:44.184Z] 5539.20 IOPS, 21.64 MiB/s [2024-10-14T14:45:45.119Z] 5539.50 IOPS, 21.64 MiB/s [2024-10-14T14:45:46.054Z] 5561.86 IOPS, 21.73 MiB/s [2024-10-14T14:45:47.429Z] 5570.25 IOPS, 21.76 MiB/s [2024-10-14T14:45:48.366Z] 5583.44 IOPS, 21.81 MiB/s [2024-10-14T14:45:48.366Z] 5585.30 IOPS, 21.82 MiB/s 00:20:43.732 Latency(us) 00:20:43.732 [2024-10-14T14:45:48.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.732 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:43.732 Verification LBA range: start 0x0 length 0x2000 00:20:43.732 TLSTESTn1 : 10.01 5589.76 21.83 0.00 0.00 22865.82 5960.66 23218.47 00:20:43.732 [2024-10-14T14:45:48.366Z] =================================================================================================================== 00:20:43.732 [2024-10-14T14:45:48.366Z] Total : 5589.76 21.83 0.00 0.00 22865.82 5960.66 23218.47 00:20:43.732 { 00:20:43.732 "results": [ 00:20:43.732 { 00:20:43.732 "job": "TLSTESTn1", 00:20:43.732 "core_mask": "0x4", 00:20:43.732 "workload": "verify", 00:20:43.732 "status": "finished", 00:20:43.732 "verify_range": { 00:20:43.732 "start": 0, 00:20:43.732 "length": 8192 00:20:43.732 }, 00:20:43.732 "queue_depth": 128, 00:20:43.732 "io_size": 4096, 00:20:43.732 "runtime": 10.014923, 00:20:43.732 "iops": 5589.758403534405, 00:20:43.732 "mibps": 21.83499376380627, 00:20:43.732 "io_failed": 0, 00:20:43.732 "io_timeout": 0, 00:20:43.732 "avg_latency_us": 22865.8248733031, 00:20:43.732 "min_latency_us": 5960.655238095238, 00:20:43.732 "max_latency_us": 23218.46857142857 00:20:43.732 } 00:20:43.732 ], 00:20:43.732 "core_count": 1 00:20:43.732 } 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:43.732 nvmf_trace.0 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 572914 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 572914 ']' 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 572914 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 572914 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 572914' 00:20:43.732 killing process with pid 572914 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 572914 00:20:43.732 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.732 00:20:43.732 Latency(us) 00:20:43.732 [2024-10-14T14:45:48.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.732 [2024-10-14T14:45:48.366Z] =================================================================================================================== 00:20:43.732 [2024-10-14T14:45:48.366Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 572914 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.732 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.732 rmmod nvme_tcp 00:20:43.991 rmmod nvme_fabrics 00:20:43.991 rmmod nvme_keyring 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 572778 ']' 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 572778 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 572778 ']' 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 572778 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 572778 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:43.991 16:45:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 572778' 00:20:43.991 killing process with pid 572778 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 572778 00:20:43.991 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 572778 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.250 16:45:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.156 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:46.156 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.nTD 00:20:46.156 00:20:46.156 real 0m20.984s 00:20:46.156 user 0m21.804s 00:20:46.156 sys 0m9.653s 00:20:46.156 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:46.156 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:46.156 ************************************ 00:20:46.156 END TEST nvmf_fips 00:20:46.156 ************************************ 00:20:46.156 16:45:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:46.156 16:45:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:46.156 16:45:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:46.156 16:45:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.156 ************************************ 00:20:46.156 START TEST nvmf_control_msg_list 00:20:46.156 ************************************ 00:20:46.156 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:46.416 * Looking for test storage... 
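The FIPS run above settled at roughly 5590 IOPS of 4 KiB reads, i.e. about 21.8 MiB/s (5589.76 x 4096 / 2^20), before the trap-driven cleanup fired. The teardown seen in the log amounts to the following; the namespace removal is shown as a plain ip netns delete, which is an assumption about what _remove_spdk_ns achieves rather than its actual implementation:
# unload the initiator-side kernel modules and stop the target process
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                      # 572778 in this run
# drop only the SPDK-tagged iptables rules, keep everything else
iptables-save | grep -v SPDK_NVMF | iptables-restore
# flush test addresses, drop the namespace, remove the PSK file
ip -4 addr flush cvl_0_1
ip netns del cvl_0_0_ns_spdk         # assumed equivalent of _remove_spdk_ns
rm -f /tmp/spdk-psk.nTD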
00:20:46.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:46.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.416 --rc genhtml_branch_coverage=1 00:20:46.416 --rc genhtml_function_coverage=1 00:20:46.416 --rc genhtml_legend=1 00:20:46.416 --rc geninfo_all_blocks=1 00:20:46.416 --rc geninfo_unexecuted_blocks=1 00:20:46.416 00:20:46.416 ' 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:46.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.416 --rc genhtml_branch_coverage=1 00:20:46.416 --rc genhtml_function_coverage=1 00:20:46.416 --rc genhtml_legend=1 00:20:46.416 --rc geninfo_all_blocks=1 00:20:46.416 --rc geninfo_unexecuted_blocks=1 00:20:46.416 00:20:46.416 ' 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:46.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.416 --rc genhtml_branch_coverage=1 00:20:46.416 --rc genhtml_function_coverage=1 00:20:46.416 --rc genhtml_legend=1 00:20:46.416 --rc geninfo_all_blocks=1 00:20:46.416 --rc geninfo_unexecuted_blocks=1 00:20:46.416 00:20:46.416 ' 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:46.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.416 --rc genhtml_branch_coverage=1 00:20:46.416 --rc genhtml_function_coverage=1 00:20:46.416 --rc genhtml_legend=1 00:20:46.416 --rc geninfo_all_blocks=1 00:20:46.416 --rc geninfo_unexecuted_blocks=1 00:20:46.416 00:20:46.416 ' 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:46.416 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:46.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:46.417 16:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:52.990 16:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:52.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.990 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.990 16:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:52.991 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:52.991 Found net devices under 0000:86:00.0: cvl_0_0 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:52.991 Found net devices under 0000:86:00.1: cvl_0_1 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.991 16:45:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:52.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:20:52.991 00:20:52.991 --- 10.0.0.2 ping statistics --- 00:20:52.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.991 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:52.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:20:52.991 00:20:52.991 --- 10.0.0.1 ping statistics --- 00:20:52.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.991 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=578181 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 578181 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 578181 ']' 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:52.991 16:45:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 [2024-10-14 16:45:57.021812] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:20:52.991 [2024-10-14 16:45:57.021853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.991 [2024-10-14 16:45:57.091648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.991 [2024-10-14 16:45:57.132192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.991 [2024-10-14 16:45:57.132226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.991 [2024-10-14 16:45:57.132233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.991 [2024-10-14 16:45:57.132239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.991 [2024-10-14 16:45:57.132245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
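The waitforlisten step above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") blocks until the freshly launched nvmf_tgt answers on its RPC socket. A rough stand-in for what that wait achieves, not its actual implementation, is simply retrying a cheap RPC until it succeeds:
# assumed equivalent of waitforlisten: poll the RPC socket until the target responds
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done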
00:20:52.991 [2024-10-14 16:45:57.132789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.991 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:52.991 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:52.991 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:52.991 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:52.991 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.991 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:52.991 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:52.991 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.992 [2024-10-14 16:45:57.263314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.992 Malloc0 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.992 16:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:52.992 [2024-10-14 16:45:57.303536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=578278 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=578280 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=578282 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 578278 00:20:52.992 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.992 [2024-10-14 16:45:57.382247] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:52.992 [2024-10-14 16:45:57.382447] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:52.992 [2024-10-14 16:45:57.382615] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:53.929 Initializing NVMe Controllers 00:20:53.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:53.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:53.929 Initialization complete. Launching workers. 
00:20:53.929 ======================================================== 00:20:53.929 Latency(us) 00:20:53.929 Device Information : IOPS MiB/s Average min max 00:20:53.929 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41022.25 40833.71 41907.95 00:20:53.929 ======================================================== 00:20:53.929 Total : 25.00 0.10 41022.25 40833.71 41907.95 00:20:53.929 00:20:53.929 Initializing NVMe Controllers 00:20:53.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:53.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:53.929 Initialization complete. Launching workers. 00:20:53.929 ======================================================== 00:20:53.929 Latency(us) 00:20:53.929 Device Information : IOPS MiB/s Average min max 00:20:53.929 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6522.00 25.48 152.99 129.07 324.44 00:20:53.929 ======================================================== 00:20:53.929 Total : 6522.00 25.48 152.99 129.07 324.44 00:20:53.929 00:20:53.929 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 578280 00:20:53.929 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 578282 00:20:54.188 Initializing NVMe Controllers 00:20:54.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:54.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:54.188 Initialization complete. Launching workers. 00:20:54.188 ======================================================== 00:20:54.188 Latency(us) 00:20:54.188 Device Information : IOPS MiB/s Average min max 00:20:54.188 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6485.00 25.33 153.85 131.58 364.28 00:20:54.188 ======================================================== 00:20:54.188 Total : 6485.00 25.33 153.85 131.58 364.28 00:20:54.188 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.188 rmmod nvme_tcp 00:20:54.188 rmmod nvme_fabrics 00:20:54.188 rmmod nvme_keyring 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # 
'[' -n 578181 ']' 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 578181 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 578181 ']' 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 578181 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 578181 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 578181' 00:20:54.188 killing process with pid 578181 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 578181 00:20:54.188 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 578181 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.448 16:45:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.354 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:56.354 00:20:56.354 real 0m10.144s 00:20:56.354 user 0m6.571s 00:20:56.354 sys 0m5.475s 00:20:56.354 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:56.354 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:56.354 ************************************ 00:20:56.354 END TEST nvmf_control_msg_list 00:20:56.354 ************************************ 00:20:56.354 
16:46:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:56.354 16:46:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:56.354 16:46:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:56.354 16:46:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:56.614 ************************************ 00:20:56.614 START TEST nvmf_wait_for_buf 00:20:56.614 ************************************ 00:20:56.614 16:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:56.614 * Looking for test storage... 00:20:56.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:56.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.614 --rc genhtml_branch_coverage=1 00:20:56.614 --rc genhtml_function_coverage=1 00:20:56.614 --rc genhtml_legend=1 00:20:56.614 --rc geninfo_all_blocks=1 00:20:56.614 --rc geninfo_unexecuted_blocks=1 00:20:56.614 00:20:56.614 ' 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:56.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.614 --rc genhtml_branch_coverage=1 00:20:56.614 --rc genhtml_function_coverage=1 00:20:56.614 --rc genhtml_legend=1 00:20:56.614 --rc geninfo_all_blocks=1 00:20:56.614 --rc geninfo_unexecuted_blocks=1 00:20:56.614 00:20:56.614 ' 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:56.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.614 --rc genhtml_branch_coverage=1 00:20:56.614 --rc genhtml_function_coverage=1 00:20:56.614 --rc genhtml_legend=1 00:20:56.614 --rc geninfo_all_blocks=1 00:20:56.614 --rc geninfo_unexecuted_blocks=1 00:20:56.614 00:20:56.614 ' 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:56.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.614 --rc genhtml_branch_coverage=1 00:20:56.614 --rc genhtml_function_coverage=1 00:20:56.614 --rc genhtml_legend=1 00:20:56.614 --rc geninfo_all_blocks=1 00:20:56.614 --rc geninfo_unexecuted_blocks=1 00:20:56.614 00:20:56.614 ' 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.614 16:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:56.614 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:56.615 16:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.188 
16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:03.188 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:03.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:03.189 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:03.189 Found net devices under 0000:86:00.0: cvl_0_0 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:03.189 Found net devices under 0000:86:00.1: cvl_0_1 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.189 16:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.189 16:46:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:03.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:21:03.189 00:21:03.189 --- 10.0.0.2 ping statistics --- 00:21:03.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.189 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:21:03.189 00:21:03.189 --- 10.0.0.1 ping statistics --- 00:21:03.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.189 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=581958 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 581958 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 581958 ']' 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.189 [2024-10-14 16:46:07.245353] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:21:03.189 [2024-10-14 16:46:07.245406] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.189 [2024-10-14 16:46:07.316656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.189 [2024-10-14 16:46:07.357436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.189 [2024-10-14 16:46:07.357471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.189 [2024-10-14 16:46:07.357478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.189 [2024-10-14 16:46:07.357483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.189 [2024-10-14 16:46:07.357488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.189 [2024-10-14 16:46:07.358054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.189 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.190 16:46:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.190 Malloc0 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.190 [2024-10-14 16:46:07.523822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.190 [2024-10-14 16:46:07.552000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.190 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:03.190 [2024-10-14 16:46:07.621678] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:04.567 Initializing NVMe Controllers 00:21:04.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:04.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:04.567 Initialization complete. Launching workers. 00:21:04.567 ======================================================== 00:21:04.567 Latency(us) 00:21:04.567 Device Information : IOPS MiB/s Average min max 00:21:04.567 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32269.99 7263.61 63842.08 00:21:04.567 ======================================================== 00:21:04.567 Total : 129.00 16.12 32269.99 7263.61 63842.08 00:21:04.567 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:04.567 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:04.567 rmmod nvme_tcp 00:21:04.567 rmmod nvme_fabrics 00:21:04.567 rmmod nvme_keyring 00:21:04.826 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 581958 ']' 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 581958 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 581958 ']' 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 581958 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 581958 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 581958' 00:21:04.827 killing process with pid 581958 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 581958 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 581958 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.827 16:46:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.362 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:07.362 00:21:07.362 real 0m10.503s 00:21:07.362 user 0m3.993s 00:21:07.362 sys 0m4.938s 00:21:07.362 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:07.362 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:07.362 ************************************ 00:21:07.362 END TEST nvmf_wait_for_buf 00:21:07.362 ************************************ 00:21:07.362 16:46:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:07.362 16:46:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:07.362 16:46:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:07.362 16:46:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:07.362 16:46:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:07.362 16:46:11 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:12.634 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:12.634 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:12.634 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:12.635 Found net devices under 0000:86:00.0: cvl_0_0 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:12.635 Found net devices under 0000:86:00.1: cvl_0_1 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:12.635 ************************************ 00:21:12.635 START TEST nvmf_perf_adq 00:21:12.635 ************************************ 00:21:12.635 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:12.895 * Looking for test storage... 00:21:12.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.895 16:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:12.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.895 --rc genhtml_branch_coverage=1 00:21:12.895 --rc genhtml_function_coverage=1 00:21:12.895 --rc genhtml_legend=1 00:21:12.895 --rc geninfo_all_blocks=1 00:21:12.895 --rc geninfo_unexecuted_blocks=1 00:21:12.895 00:21:12.895 ' 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:12.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.895 --rc genhtml_branch_coverage=1 00:21:12.895 --rc genhtml_function_coverage=1 00:21:12.895 --rc genhtml_legend=1 00:21:12.895 --rc geninfo_all_blocks=1 00:21:12.895 --rc geninfo_unexecuted_blocks=1 00:21:12.895 00:21:12.895 ' 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:12.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.895 --rc genhtml_branch_coverage=1 00:21:12.895 --rc genhtml_function_coverage=1 00:21:12.895 --rc genhtml_legend=1 00:21:12.895 --rc geninfo_all_blocks=1 00:21:12.895 --rc geninfo_unexecuted_blocks=1 00:21:12.895 00:21:12.895 ' 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:12.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.895 --rc genhtml_branch_coverage=1 00:21:12.895 --rc genhtml_function_coverage=1 00:21:12.895 --rc genhtml_legend=1 00:21:12.895 --rc geninfo_all_blocks=1 00:21:12.895 --rc geninfo_unexecuted_blocks=1 00:21:12.895 00:21:12.895 ' 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
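The xtrace above is perf_adq.sh probing the installed lcov version: scripts/common.sh runs "lt 1.15 2" through cmp_versions, splitting each version string on ".-:" and comparing component by component, and because lcov is older than 2.0 it exports the lcov 1.x branch/function coverage flags. A simplified bash reconstruction of that comparison, pieced together from the trace (the helper names lt/cmp_versions are the ones the trace shows; the body is an illustrative sketch, not the verbatim upstream scripts/common.sh):

# Sketch of the version check traced above; non-numeric components are not
# handled as carefully as the real helper, this only mirrors the traced path.
lt() { cmp_versions "$1" "<" "$2"; }
cmp_versions() {
    local ver1 ver2 op=$2 v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"      # e.g. "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"      # e.g. "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}      # missing components compare as 0
        ((d1 > d2)) && [[ $op == ">" || $op == ">=" ]] && return 0
        ((d1 > d2)) && return 1
        ((d1 < d2)) && [[ $op == "<" || $op == "<=" ]] && return 0
        ((d1 < d2)) && return 1
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]   # all components equal
}
lt 1.15 2 && echo "lcov < 2: export the 1.x --rc lcov_branch_coverage/lcov_function_coverage options"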
00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:12.895 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:12.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:12.896 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:12.896 16:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.599 16:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:19.599 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:19.599 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:19.599 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:19.600 Found net devices under 0000:86:00.0: cvl_0_0 00:21:19.600 16:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:19.600 Found net devices under 0000:86:00.1: cvl_0_1 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:19.600 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:19.600 16:46:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:22.136 16:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.407 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:27.408 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:27.408 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:27.408 Found net devices under 0000:86:00.0: cvl_0_0 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:27.408 Found net devices under 0000:86:00.1: cvl_0_1 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:21:27.408 00:21:27.408 --- 10.0.0.2 ping statistics --- 00:21:27.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.408 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
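For readability, the nvmftestinit sequence traced above condenses to the commands below (interface and namespace names are the ones this rig reports; the reply to the second ping continues just below). The target port is isolated in its own network namespace, the initiator port stays in the root namespace, and the iptables rule is tagged with an SPDK_NVMF comment so the iptr helper can strip it again at teardown:

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator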
00:21:27.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:21:27.408 00:21:27.408 --- 10.0.0.1 ping statistics --- 00:21:27.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.408 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.408 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=590303 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 590303 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 590303 ']' 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 [2024-10-14 16:46:31.618120] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
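nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so that adq_configure_nvmf_target can set socket options before the framework initializes; the DPDK/EAL parameter line for this launch and the reactor startup notices continue directly below, followed by the configuration RPCs themselves. Those rpc_cmd calls are roughly equivalent to the scripts/rpc.py sequence sketched here (the $rpc shorthand is illustrative; the flags and sizes are the ones this run of perf_adq.sh traces):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
$rpc framework_start_init                        # leave --wait-for-rpc mode, start subsystems
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420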
00:21:27.409 [2024-10-14 16:46:31.618169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.409 [2024-10-14 16:46:31.693059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.409 [2024-10-14 16:46:31.736677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.409 [2024-10-14 16:46:31.736713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.409 [2024-10-14 16:46:31.736720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.409 [2024-10-14 16:46:31.736727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.409 [2024-10-14 16:46:31.736731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.409 [2024-10-14 16:46:31.738321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.409 [2024-10-14 16:46:31.738439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.409 [2024-10-14 16:46:31.738551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.409 [2024-10-14 16:46:31.738551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.409 
16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 [2024-10-14 16:46:31.943904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 Malloc1 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.409 16:46:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.409 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.409 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.409 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.409 [2024-10-14 16:46:32.007246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.409 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.409 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=590530 00:21:27.409 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:27.409 16:46:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:29.943 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:29.943 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.943 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.943 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.943 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:29.943 "tick_rate": 2100000000, 00:21:29.943 "poll_groups": [ 00:21:29.943 { 00:21:29.943 "name": "nvmf_tgt_poll_group_000", 00:21:29.943 "admin_qpairs": 1, 00:21:29.943 "io_qpairs": 1, 00:21:29.943 "current_admin_qpairs": 1, 00:21:29.943 "current_io_qpairs": 1, 00:21:29.943 "pending_bdev_io": 0, 00:21:29.943 "completed_nvme_io": 19430, 00:21:29.943 "transports": [ 00:21:29.943 { 00:21:29.943 "trtype": "TCP" 00:21:29.943 } 00:21:29.943 ] 00:21:29.943 }, 00:21:29.943 { 00:21:29.943 "name": "nvmf_tgt_poll_group_001", 00:21:29.944 "admin_qpairs": 0, 00:21:29.944 "io_qpairs": 1, 00:21:29.944 "current_admin_qpairs": 0, 00:21:29.944 "current_io_qpairs": 1, 00:21:29.944 "pending_bdev_io": 0, 00:21:29.944 "completed_nvme_io": 19923, 00:21:29.944 "transports": [ 00:21:29.944 { 00:21:29.944 "trtype": "TCP" 00:21:29.944 } 00:21:29.944 ] 00:21:29.944 }, 00:21:29.944 { 00:21:29.944 "name": "nvmf_tgt_poll_group_002", 00:21:29.944 "admin_qpairs": 0, 00:21:29.944 "io_qpairs": 1, 00:21:29.944 "current_admin_qpairs": 0, 00:21:29.944 "current_io_qpairs": 1, 00:21:29.944 "pending_bdev_io": 0, 00:21:29.944 "completed_nvme_io": 19592, 00:21:29.944 "transports": [ 00:21:29.944 { 00:21:29.944 "trtype": "TCP" 00:21:29.944 } 00:21:29.944 ] 00:21:29.944 }, 00:21:29.944 { 00:21:29.944 "name": "nvmf_tgt_poll_group_003", 00:21:29.944 "admin_qpairs": 0, 00:21:29.944 "io_qpairs": 1, 00:21:29.944 "current_admin_qpairs": 0, 00:21:29.944 "current_io_qpairs": 1, 00:21:29.944 "pending_bdev_io": 0, 00:21:29.944 "completed_nvme_io": 19750, 00:21:29.944 "transports": [ 00:21:29.944 { 00:21:29.944 "trtype": "TCP" 00:21:29.944 } 00:21:29.944 ] 00:21:29.944 } 00:21:29.944 ] 00:21:29.944 }' 00:21:29.944 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:29.944 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:29.944 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:29.944 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:29.944 16:46:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 590530 00:21:38.059 Initializing NVMe Controllers 00:21:38.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:38.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:38.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:38.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:38.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:21:38.059 Initialization complete. Launching workers. 00:21:38.059 ======================================================== 00:21:38.059 Latency(us) 00:21:38.059 Device Information : IOPS MiB/s Average min max 00:21:38.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10295.68 40.22 6215.40 2198.36 10668.04 00:21:38.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10556.78 41.24 6061.32 2285.80 10915.16 00:21:38.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10305.18 40.25 6211.35 1968.04 10303.24 00:21:38.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10325.78 40.34 6198.97 2463.25 10638.53 00:21:38.060 ======================================================== 00:21:38.060 Total : 41483.43 162.04 6171.09 1968.04 10915.16 00:21:38.060 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.060 rmmod nvme_tcp 00:21:38.060 rmmod nvme_fabrics 00:21:38.060 rmmod nvme_keyring 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 590303 ']' 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 590303 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 590303 ']' 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 590303 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 590303 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 590303' 00:21:38.060 killing process with pid 590303 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 590303 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 590303 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.060 16:46:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.967 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:39.967 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:39.967 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:39.967 16:46:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:41.344 16:46:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:43.249 16:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.523 16:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:48.523 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:48.523 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:48.523 Found net devices under 0000:86:00.0: cvl_0_0 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:48.523 16:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:48.523 Found net devices under 0000:86:00.1: cvl_0_1 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:48.523 16:46:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.523 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.523 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.523 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:21:48.524 00:21:48.524 --- 10.0.0.2 ping statistics --- 00:21:48.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.524 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:21:48.524 00:21:48.524 --- 10.0.0.1 ping statistics --- 00:21:48.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.524 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:48.524 net.core.busy_poll = 1 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:21:48.524 net.core.busy_read = 1 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:48.524 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=594204 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 594204 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 594204 ']' 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.783 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:48.783 [2024-10-14 16:46:53.370821] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:21:48.783 [2024-10-14 16:46:53.370866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.042 [2024-10-14 16:46:53.425255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.042 [2024-10-14 16:46:53.467668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
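Note on the ADQ setup traced above: before restarting the target, target/perf_adq.sh enables hardware TC offload and busy polling on the E810 port, splits it into two traffic classes, and steers NVMe/TCP traffic for port 4420 into the second class. A condensed sketch of that driver-side configuration, assembled from the commands in this run (the interface cvl_0_0, namespace cvl_0_0_ns_spdk, address 10.0.0.2 and port 4420 are specific to this testbed):

    # hardware TC offload on the ice interface; disable packet-inspect optimization
    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # busy-poll sockets rather than waiting on interrupts
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes in channel mode: 2 queues at offset 0 (TC0), 2 queues at offset 2 (TC1)
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    # steer TCP traffic to 10.0.0.2:4420 (the NVMe/TCP listener) into TC 1 in hardware
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
        flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The entries that follow finish the target-side half of the same setup: nvmf_tgt is started with --wait-for-rpc, placement IDs are enabled on the posix sock implementation, and the TCP transport is created with --sock-priority 1 so that queue pairs land on the ADQ traffic class.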
00:21:49.042 [2024-10-14 16:46:53.467702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.042 [2024-10-14 16:46:53.467709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.042 [2024-10-14 16:46:53.467715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.043 [2024-10-14 16:46:53.467720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.043 [2024-10-14 16:46:53.469273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.043 [2024-10-14 16:46:53.469380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.043 [2024-10-14 16:46:53.469491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.043 [2024-10-14 16:46:53.469492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.043 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.302 16:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.302 [2024-10-14 16:46:53.690685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.302 Malloc1 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.302 [2024-10-14 16:46:53.764538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=594354 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:49.302 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:51.214 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:51.214 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.214 16:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:51.214 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.214 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:51.214 "tick_rate": 2100000000, 00:21:51.214 "poll_groups": [ 00:21:51.214 { 00:21:51.214 "name": "nvmf_tgt_poll_group_000", 00:21:51.214 "admin_qpairs": 1, 00:21:51.214 "io_qpairs": 2, 00:21:51.214 "current_admin_qpairs": 1, 00:21:51.214 "current_io_qpairs": 2, 00:21:51.214 "pending_bdev_io": 0, 00:21:51.214 "completed_nvme_io": 28715, 00:21:51.214 "transports": [ 00:21:51.214 { 00:21:51.214 "trtype": "TCP" 00:21:51.214 } 00:21:51.214 ] 00:21:51.214 }, 00:21:51.214 { 00:21:51.214 "name": "nvmf_tgt_poll_group_001", 00:21:51.214 "admin_qpairs": 0, 00:21:51.214 "io_qpairs": 2, 00:21:51.214 "current_admin_qpairs": 0, 00:21:51.214 "current_io_qpairs": 2, 00:21:51.214 "pending_bdev_io": 0, 00:21:51.214 "completed_nvme_io": 28606, 00:21:51.214 "transports": [ 00:21:51.214 { 00:21:51.214 "trtype": "TCP" 00:21:51.214 } 00:21:51.214 ] 00:21:51.214 }, 00:21:51.214 { 00:21:51.214 "name": "nvmf_tgt_poll_group_002", 00:21:51.214 "admin_qpairs": 0, 00:21:51.214 "io_qpairs": 0, 00:21:51.214 "current_admin_qpairs": 0, 00:21:51.214 "current_io_qpairs": 0, 00:21:51.214 "pending_bdev_io": 0, 00:21:51.214 "completed_nvme_io": 0, 00:21:51.214 "transports": [ 00:21:51.214 { 00:21:51.214 "trtype": "TCP" 00:21:51.214 } 00:21:51.214 ] 00:21:51.214 }, 00:21:51.214 { 00:21:51.214 "name": "nvmf_tgt_poll_group_003", 00:21:51.214 "admin_qpairs": 0, 00:21:51.214 "io_qpairs": 0, 00:21:51.214 "current_admin_qpairs": 0, 00:21:51.214 "current_io_qpairs": 0, 00:21:51.214 "pending_bdev_io": 0, 00:21:51.214 "completed_nvme_io": 0, 00:21:51.214 "transports": [ 00:21:51.214 { 00:21:51.214 "trtype": "TCP" 00:21:51.214 } 00:21:51.214 ] 00:21:51.214 } 00:21:51.214 ] 00:21:51.214 }' 00:21:51.214 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:51.214 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:51.214 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:51.214 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:51.214 16:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 594354 00:21:59.336 Initializing NVMe Controllers 00:21:59.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:59.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:59.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:59.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:59.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:59.336 Initialization complete. Launching workers. 
00:21:59.336 ======================================================== 00:21:59.336 Latency(us) 00:21:59.336 Device Information : IOPS MiB/s Average min max 00:21:59.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7552.40 29.50 8473.83 1470.57 54439.79 00:21:59.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7725.30 30.18 8314.98 1551.42 54174.70 00:21:59.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7289.00 28.47 8779.08 1492.32 53020.56 00:21:59.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7378.20 28.82 8675.79 1220.19 55944.69 00:21:59.336 ======================================================== 00:21:59.336 Total : 29944.90 116.97 8556.91 1220.19 55944.69 00:21:59.336 00:21:59.336 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:59.336 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:59.336 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:59.336 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.336 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:59.336 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.336 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.336 rmmod nvme_tcp 00:21:59.595 rmmod nvme_fabrics 00:21:59.595 rmmod nvme_keyring 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 594204 ']' 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 594204 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 594204 ']' 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 594204 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 594204 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 594204' 00:21:59.595 killing process with pid 594204 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 594204 00:21:59.595 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 594204 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:59.854 16:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.854 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:03.142 00:22:03.142 real 0m50.145s 00:22:03.142 user 2m44.046s 00:22:03.142 sys 0m10.144s 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.142 ************************************ 00:22:03.142 END TEST nvmf_perf_adq 00:22:03.142 ************************************ 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:03.142 ************************************ 00:22:03.142 START TEST nvmf_shutdown 00:22:03.142 ************************************ 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:03.142 * Looking for test storage... 
00:22:03.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:03.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.142 --rc genhtml_branch_coverage=1 00:22:03.142 --rc genhtml_function_coverage=1 00:22:03.142 --rc genhtml_legend=1 00:22:03.142 --rc geninfo_all_blocks=1 00:22:03.142 --rc geninfo_unexecuted_blocks=1 00:22:03.142 00:22:03.142 ' 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:03.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.142 --rc genhtml_branch_coverage=1 00:22:03.142 --rc genhtml_function_coverage=1 00:22:03.142 --rc genhtml_legend=1 00:22:03.142 --rc geninfo_all_blocks=1 00:22:03.142 --rc geninfo_unexecuted_blocks=1 00:22:03.142 00:22:03.142 ' 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:03.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.142 --rc genhtml_branch_coverage=1 00:22:03.142 --rc genhtml_function_coverage=1 00:22:03.142 --rc genhtml_legend=1 00:22:03.142 --rc geninfo_all_blocks=1 00:22:03.142 --rc geninfo_unexecuted_blocks=1 00:22:03.142 00:22:03.142 ' 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:03.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.142 --rc genhtml_branch_coverage=1 00:22:03.142 --rc genhtml_function_coverage=1 00:22:03.142 --rc genhtml_legend=1 00:22:03.142 --rc geninfo_all_blocks=1 00:22:03.142 --rc geninfo_unexecuted_blocks=1 00:22:03.142 00:22:03.142 ' 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.142 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
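The preceding entries trace the lcov version gate in autotest_common.sh / scripts/common.sh: the version reported by lcov --version is compared against 2 (the call here is lt 1.15 2) by splitting both strings on '.', '-' and ':' and walking the components, and the run then exports LCOV_OPTS with --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1. A minimal standalone sketch of that componentwise comparison (version_lt is an illustrative name, not the helper the script actually defines):

    # succeeds (returns 0) when dotted version $1 sorts before version $2
    version_lt() {
        local IFS=.-: i x y
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}      # missing components compare as 0
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"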
00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:03.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:03.143 16:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:03.143 ************************************ 00:22:03.143 START TEST nvmf_shutdown_tc1 00:22:03.143 ************************************ 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:03.143 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.712 16:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.712 16:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:09.712 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:09.712 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:09.712 Found net devices under 0000:86:00.0: cvl_0_0 00:22:09.712 16:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:09.712 Found net devices under 0000:86:00.1: cvl_0_1 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.712 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:22:09.713 00:22:09.713 --- 10.0.0.2 ping statistics --- 00:22:09.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.713 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:22:09.713 00:22:09.713 --- 10.0.0.1 ping statistics --- 00:22:09.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.713 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=599802 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 599802 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 599802 ']' 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:09.713 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.713 [2024-10-14 16:47:13.720870] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:22:09.713 [2024-10-14 16:47:13.720918] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.713 [2024-10-14 16:47:13.798207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.713 [2024-10-14 16:47:13.840824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.713 [2024-10-14 16:47:13.840861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.713 [2024-10-14 16:47:13.840868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.713 [2024-10-14 16:47:13.840873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.713 [2024-10-14 16:47:13.840878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.713 [2024-10-14 16:47:13.842328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.713 [2024-10-14 16:47:13.842441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.713 [2024-10-14 16:47:13.842441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:09.713 [2024-10-14 16:47:13.842348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.972 [2024-10-14 16:47:14.581220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:09.972 16:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:09.972 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.231 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:10.231 Malloc1 
00:22:10.231 [2024-10-14 16:47:14.698678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.231 Malloc2 00:22:10.231 Malloc3 00:22:10.231 Malloc4 00:22:10.231 Malloc5 00:22:10.490 Malloc6 00:22:10.490 Malloc7 00:22:10.490 Malloc8 00:22:10.490 Malloc9 00:22:10.490 Malloc10 00:22:10.490 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.490 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:10.490 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.490 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=600082 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 600082 /var/tmp/bdevperf.sock 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 600082 ']' 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:10.749 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:10.749 { 00:22:10.749 "params": { 00:22:10.749 "name": "Nvme$subsystem", 00:22:10.749 "trtype": "$TEST_TRANSPORT", 00:22:10.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.749 "adrfam": "ipv4", 00:22:10.749 "trsvcid": "$NVMF_PORT", 00:22:10.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.749 "hdgst": ${hdgst:-false}, 00:22:10.749 "ddgst": ${ddgst:-false} 00:22:10.749 }, 00:22:10.749 "method": "bdev_nvme_attach_controller" 00:22:10.750 } 00:22:10.750 EOF 00:22:10.750 )") 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:10.750 { 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme$subsystem", 00:22:10.750 "trtype": "$TEST_TRANSPORT", 00:22:10.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "$NVMF_PORT", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.750 "hdgst": ${hdgst:-false}, 00:22:10.750 "ddgst": ${ddgst:-false} 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 } 00:22:10.750 EOF 00:22:10.750 )") 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:10.750 { 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme$subsystem", 00:22:10.750 "trtype": "$TEST_TRANSPORT", 00:22:10.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "$NVMF_PORT", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.750 "hdgst": ${hdgst:-false}, 00:22:10.750 "ddgst": ${ddgst:-false} 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 } 00:22:10.750 EOF 00:22:10.750 )") 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- 
# config+=("$(cat <<-EOF 00:22:10.750 { 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme$subsystem", 00:22:10.750 "trtype": "$TEST_TRANSPORT", 00:22:10.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "$NVMF_PORT", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.750 "hdgst": ${hdgst:-false}, 00:22:10.750 "ddgst": ${ddgst:-false} 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 } 00:22:10.750 EOF 00:22:10.750 )") 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:10.750 { 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme$subsystem", 00:22:10.750 "trtype": "$TEST_TRANSPORT", 00:22:10.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "$NVMF_PORT", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.750 "hdgst": ${hdgst:-false}, 00:22:10.750 "ddgst": ${ddgst:-false} 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 } 00:22:10.750 EOF 00:22:10.750 )") 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:10.750 { 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme$subsystem", 00:22:10.750 "trtype": "$TEST_TRANSPORT", 00:22:10.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "$NVMF_PORT", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.750 "hdgst": ${hdgst:-false}, 00:22:10.750 "ddgst": ${ddgst:-false} 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 } 00:22:10.750 EOF 00:22:10.750 )") 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:10.750 { 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme$subsystem", 00:22:10.750 "trtype": "$TEST_TRANSPORT", 00:22:10.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "$NVMF_PORT", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.750 "hdgst": ${hdgst:-false}, 00:22:10.750 "ddgst": ${ddgst:-false} 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 } 00:22:10.750 EOF 00:22:10.750 )") 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:10.750 [2024-10-14 16:47:15.179839] Starting SPDK 
v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:22:10.750 [2024-10-14 16:47:15.179887] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:10.750 { 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme$subsystem", 00:22:10.750 "trtype": "$TEST_TRANSPORT", 00:22:10.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "$NVMF_PORT", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.750 "hdgst": ${hdgst:-false}, 00:22:10.750 "ddgst": ${ddgst:-false} 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 } 00:22:10.750 EOF 00:22:10.750 )") 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:10.750 { 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme$subsystem", 00:22:10.750 "trtype": "$TEST_TRANSPORT", 00:22:10.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "$NVMF_PORT", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.750 "hdgst": ${hdgst:-false}, 00:22:10.750 "ddgst": ${ddgst:-false} 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 } 00:22:10.750 EOF 00:22:10.750 )") 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:10.750 { 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme$subsystem", 00:22:10.750 "trtype": "$TEST_TRANSPORT", 00:22:10.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "$NVMF_PORT", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:10.750 "hdgst": ${hdgst:-false}, 00:22:10.750 "ddgst": ${ddgst:-false} 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 } 00:22:10.750 EOF 00:22:10.750 )") 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:10.750 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme1", 00:22:10.750 "trtype": "tcp", 00:22:10.750 "traddr": "10.0.0.2", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "4420", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.750 "hdgst": false, 00:22:10.750 "ddgst": false 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 },{ 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme2", 00:22:10.750 "trtype": "tcp", 00:22:10.750 "traddr": "10.0.0.2", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "4420", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:10.750 "hdgst": false, 00:22:10.750 "ddgst": false 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 },{ 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme3", 00:22:10.750 "trtype": "tcp", 00:22:10.750 "traddr": "10.0.0.2", 00:22:10.750 "adrfam": "ipv4", 00:22:10.750 "trsvcid": "4420", 00:22:10.750 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:10.750 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:10.750 "hdgst": false, 00:22:10.750 "ddgst": false 00:22:10.750 }, 00:22:10.750 "method": "bdev_nvme_attach_controller" 00:22:10.750 },{ 00:22:10.750 "params": { 00:22:10.750 "name": "Nvme4", 00:22:10.750 "trtype": "tcp", 00:22:10.750 "traddr": "10.0.0.2", 00:22:10.751 "adrfam": "ipv4", 00:22:10.751 "trsvcid": "4420", 00:22:10.751 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:10.751 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:10.751 "hdgst": false, 00:22:10.751 "ddgst": false 00:22:10.751 }, 00:22:10.751 "method": "bdev_nvme_attach_controller" 00:22:10.751 },{ 00:22:10.751 "params": { 00:22:10.751 "name": "Nvme5", 00:22:10.751 "trtype": "tcp", 00:22:10.751 "traddr": "10.0.0.2", 00:22:10.751 "adrfam": "ipv4", 00:22:10.751 "trsvcid": "4420", 00:22:10.751 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:10.751 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:10.751 "hdgst": false, 00:22:10.751 "ddgst": false 00:22:10.751 }, 00:22:10.751 "method": "bdev_nvme_attach_controller" 00:22:10.751 },{ 00:22:10.751 "params": { 00:22:10.751 "name": "Nvme6", 00:22:10.751 "trtype": "tcp", 00:22:10.751 "traddr": "10.0.0.2", 00:22:10.751 "adrfam": "ipv4", 00:22:10.751 "trsvcid": "4420", 00:22:10.751 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:10.751 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:10.751 "hdgst": false, 00:22:10.751 "ddgst": false 00:22:10.751 }, 00:22:10.751 "method": "bdev_nvme_attach_controller" 00:22:10.751 },{ 00:22:10.751 "params": { 00:22:10.751 "name": "Nvme7", 00:22:10.751 "trtype": "tcp", 00:22:10.751 "traddr": "10.0.0.2", 00:22:10.751 "adrfam": "ipv4", 00:22:10.751 "trsvcid": "4420", 00:22:10.751 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:10.751 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:10.751 "hdgst": false, 00:22:10.751 "ddgst": false 00:22:10.751 }, 00:22:10.751 "method": "bdev_nvme_attach_controller" 00:22:10.751 },{ 00:22:10.751 "params": { 00:22:10.751 "name": "Nvme8", 00:22:10.751 "trtype": "tcp", 00:22:10.751 "traddr": "10.0.0.2", 00:22:10.751 "adrfam": "ipv4", 00:22:10.751 "trsvcid": "4420", 00:22:10.751 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:10.751 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:10.751 "hdgst": false, 00:22:10.751 "ddgst": false 00:22:10.751 }, 00:22:10.751 "method": "bdev_nvme_attach_controller" 00:22:10.751 },{ 00:22:10.751 "params": { 00:22:10.751 "name": "Nvme9", 00:22:10.751 "trtype": "tcp", 00:22:10.751 "traddr": "10.0.0.2", 00:22:10.751 "adrfam": "ipv4", 00:22:10.751 "trsvcid": "4420", 00:22:10.751 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:10.751 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:10.751 "hdgst": false, 00:22:10.751 "ddgst": false 00:22:10.751 }, 00:22:10.751 "method": "bdev_nvme_attach_controller" 00:22:10.751 },{ 00:22:10.751 "params": { 00:22:10.751 "name": "Nvme10", 00:22:10.751 "trtype": "tcp", 00:22:10.751 "traddr": "10.0.0.2", 00:22:10.751 "adrfam": "ipv4", 00:22:10.751 "trsvcid": "4420", 00:22:10.751 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:10.751 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:10.751 "hdgst": false, 00:22:10.751 "ddgst": false 00:22:10.751 }, 00:22:10.751 "method": "bdev_nvme_attach_controller" 00:22:10.751 }' 00:22:10.751 [2024-10-14 16:47:15.250456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.751 [2024-10-14 16:47:15.291176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.653 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.653 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:12.653 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:12.653 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.653 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.653 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.653 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 600082 00:22:12.653 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:12.653 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:13.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 600082 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 599802 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:13.589 { 00:22:13.589 "params": { 00:22:13.589 "name": "Nvme$subsystem", 00:22:13.589 "trtype": "$TEST_TRANSPORT", 00:22:13.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.589 "adrfam": "ipv4", 00:22:13.589 "trsvcid": "$NVMF_PORT", 00:22:13.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.589 "hdgst": ${hdgst:-false}, 00:22:13.589 "ddgst": ${ddgst:-false} 00:22:13.589 }, 00:22:13.589 "method": "bdev_nvme_attach_controller" 00:22:13.589 } 00:22:13.589 EOF 00:22:13.589 )") 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:13.589 { 00:22:13.589 "params": { 00:22:13.589 "name": "Nvme$subsystem", 00:22:13.589 "trtype": "$TEST_TRANSPORT", 00:22:13.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.589 "adrfam": "ipv4", 00:22:13.589 "trsvcid": "$NVMF_PORT", 00:22:13.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.589 "hdgst": ${hdgst:-false}, 00:22:13.589 "ddgst": ${ddgst:-false} 00:22:13.589 }, 00:22:13.589 "method": "bdev_nvme_attach_controller" 00:22:13.589 } 00:22:13.589 EOF 00:22:13.589 )") 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:13.589 { 00:22:13.589 "params": { 00:22:13.589 "name": "Nvme$subsystem", 00:22:13.589 "trtype": "$TEST_TRANSPORT", 00:22:13.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.589 "adrfam": "ipv4", 00:22:13.589 "trsvcid": "$NVMF_PORT", 00:22:13.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.589 "hdgst": ${hdgst:-false}, 00:22:13.589 "ddgst": ${ddgst:-false} 00:22:13.589 }, 00:22:13.589 "method": "bdev_nvme_attach_controller" 00:22:13.589 } 00:22:13.589 EOF 00:22:13.589 )") 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:13.589 { 00:22:13.589 "params": { 00:22:13.589 "name": "Nvme$subsystem", 00:22:13.589 "trtype": "$TEST_TRANSPORT", 00:22:13.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.589 "adrfam": "ipv4", 00:22:13.589 "trsvcid": "$NVMF_PORT", 00:22:13.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.589 "hdgst": ${hdgst:-false}, 00:22:13.589 "ddgst": ${ddgst:-false} 00:22:13.589 }, 00:22:13.589 "method": "bdev_nvme_attach_controller" 00:22:13.589 } 00:22:13.589 EOF 00:22:13.589 )") 00:22:13.589 16:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:13.589 { 00:22:13.589 "params": { 00:22:13.589 "name": "Nvme$subsystem", 00:22:13.589 "trtype": "$TEST_TRANSPORT", 00:22:13.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.589 "adrfam": "ipv4", 00:22:13.589 "trsvcid": "$NVMF_PORT", 00:22:13.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.589 "hdgst": ${hdgst:-false}, 00:22:13.589 "ddgst": ${ddgst:-false} 00:22:13.589 }, 00:22:13.589 "method": "bdev_nvme_attach_controller" 00:22:13.589 } 00:22:13.589 EOF 00:22:13.589 )") 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:13.589 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:13.589 { 00:22:13.589 "params": { 00:22:13.590 "name": "Nvme$subsystem", 00:22:13.590 "trtype": "$TEST_TRANSPORT", 00:22:13.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "$NVMF_PORT", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.590 "hdgst": ${hdgst:-false}, 00:22:13.590 "ddgst": ${ddgst:-false} 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 } 00:22:13.590 EOF 00:22:13.590 )") 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:13.590 { 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme$subsystem", 00:22:13.590 "trtype": "$TEST_TRANSPORT", 00:22:13.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "$NVMF_PORT", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.590 "hdgst": ${hdgst:-false}, 00:22:13.590 "ddgst": ${ddgst:-false} 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 } 00:22:13.590 EOF 00:22:13.590 )") 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:13.590 [2024-10-14 16:47:18.105012] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:22:13.590 [2024-10-14 16:47:18.105061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid600574 ] 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:13.590 { 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme$subsystem", 00:22:13.590 "trtype": "$TEST_TRANSPORT", 00:22:13.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "$NVMF_PORT", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.590 "hdgst": ${hdgst:-false}, 00:22:13.590 "ddgst": ${ddgst:-false} 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 } 00:22:13.590 EOF 00:22:13.590 )") 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:13.590 { 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme$subsystem", 00:22:13.590 "trtype": "$TEST_TRANSPORT", 00:22:13.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "$NVMF_PORT", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.590 "hdgst": ${hdgst:-false}, 00:22:13.590 "ddgst": ${ddgst:-false} 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 } 00:22:13.590 EOF 00:22:13.590 )") 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:13.590 { 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme$subsystem", 00:22:13.590 "trtype": "$TEST_TRANSPORT", 00:22:13.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "$NVMF_PORT", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:13.590 "hdgst": ${hdgst:-false}, 00:22:13.590 "ddgst": ${ddgst:-false} 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 } 00:22:13.590 EOF 00:22:13.590 )") 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:13.590 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme1", 00:22:13.590 "trtype": "tcp", 00:22:13.590 "traddr": "10.0.0.2", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "4420", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:13.590 "hdgst": false, 00:22:13.590 "ddgst": false 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 },{ 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme2", 00:22:13.590 "trtype": "tcp", 00:22:13.590 "traddr": "10.0.0.2", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "4420", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:13.590 "hdgst": false, 00:22:13.590 "ddgst": false 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 },{ 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme3", 00:22:13.590 "trtype": "tcp", 00:22:13.590 "traddr": "10.0.0.2", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "4420", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:13.590 "hdgst": false, 00:22:13.590 "ddgst": false 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 },{ 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme4", 00:22:13.590 "trtype": "tcp", 00:22:13.590 "traddr": "10.0.0.2", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "4420", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:13.590 "hdgst": false, 00:22:13.590 "ddgst": false 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 },{ 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme5", 00:22:13.590 "trtype": "tcp", 00:22:13.590 "traddr": "10.0.0.2", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "4420", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:13.590 "hdgst": false, 00:22:13.590 "ddgst": false 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 },{ 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme6", 00:22:13.590 "trtype": "tcp", 00:22:13.590 "traddr": "10.0.0.2", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "4420", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:13.590 "hdgst": false, 00:22:13.590 "ddgst": false 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 },{ 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme7", 00:22:13.590 "trtype": "tcp", 00:22:13.590 "traddr": "10.0.0.2", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "4420", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:13.590 "hdgst": false, 00:22:13.590 "ddgst": false 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 },{ 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme8", 00:22:13.590 "trtype": "tcp", 00:22:13.590 "traddr": "10.0.0.2", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "4420", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:13.590 "hdgst": false, 00:22:13.590 "ddgst": false 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 },{ 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme9", 00:22:13.590 "trtype": "tcp", 00:22:13.590 "traddr": "10.0.0.2", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "4420", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:13.590 "hdgst": false, 00:22:13.590 "ddgst": false 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 },{ 00:22:13.590 "params": { 00:22:13.590 "name": "Nvme10", 00:22:13.590 "trtype": "tcp", 00:22:13.590 "traddr": "10.0.0.2", 00:22:13.590 "adrfam": "ipv4", 00:22:13.590 "trsvcid": "4420", 00:22:13.590 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:13.590 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:13.590 "hdgst": false, 00:22:13.590 "ddgst": false 00:22:13.590 }, 00:22:13.590 "method": "bdev_nvme_attach_controller" 00:22:13.590 }' 00:22:13.590 [2024-10-14 16:47:18.175476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.590 [2024-10-14 16:47:18.216309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.967 Running I/O for 1 seconds... 00:22:16.162 2253.00 IOPS, 140.81 MiB/s 00:22:16.162 Latency(us) 00:22:16.162 [2024-10-14T14:47:20.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.162 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:16.162 Verification LBA range: start 0x0 length 0x400 00:22:16.162 Nvme1n1 : 1.13 286.59 17.91 0.00 0.00 220558.98 15853.47 207717.91 00:22:16.162 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:16.162 Verification LBA range: start 0x0 length 0x400 00:22:16.162 Nvme2n1 : 1.06 241.99 15.12 0.00 0.00 256634.15 20347.37 241671.80 00:22:16.162 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:16.162 Verification LBA range: start 0x0 length 0x400 00:22:16.162 Nvme3n1 : 1.12 284.92 17.81 0.00 0.00 215549.56 15791.06 218702.99 00:22:16.162 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:16.162 Verification LBA range: start 0x0 length 0x400 00:22:16.162 Nvme4n1 : 1.14 281.58 17.60 0.00 0.00 215585.45 15915.89 220700.28 00:22:16.162 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:16.162 Verification LBA range: start 0x0 length 0x400 00:22:16.162 Nvme5n1 : 1.15 283.52 17.72 0.00 0.00 210327.59 6647.22 205720.62 00:22:16.162 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:16.162 Verification LBA range: start 0x0 length 0x400 00:22:16.162 Nvme6n1 : 1.14 279.55 17.47 0.00 0.00 211010.51 20347.37 218702.99 00:22:16.162 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:16.162 Verification LBA range: start 0x0 length 0x400 00:22:16.162 Nvme7n1 : 1.15 277.24 17.33 0.00 0.00 209924.88 13606.52 220700.28 00:22:16.162 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:16.162 Verification LBA range: start 0x0 length 0x400 00:22:16.162 Nvme8n1 : 1.15 278.41 17.40 0.00 0.00 205944.54 13606.52 208716.56 00:22:16.162 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:16.162 Verification LBA range: start 0x0 length 0x400 00:22:16.162 Nvme9n1 : 1.16 275.85 17.24 0.00 0.00 205056.88 16976.94 221698.93 00:22:16.162 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:16.162 Verification LBA range: start 0x0 length 0x400 00:22:16.162 Nvme10n1 : 1.16 279.94 17.50 0.00 0.00 198915.08 1482.36 239674.51 00:22:16.162 [2024-10-14T14:47:20.796Z] =================================================================================================================== 00:22:16.162 [2024-10-14T14:47:20.796Z] Total : 2769.61 173.10 0.00 0.00 214083.01 1482.36 241671.80 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.421 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.421 rmmod nvme_tcp 00:22:16.421 rmmod nvme_fabrics 00:22:16.421 rmmod nvme_keyring 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 599802 ']' 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 599802 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 599802 ']' 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 599802 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 599802 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 599802' 00:22:16.422 killing process with pid 599802 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 599802 00:22:16.422 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 599802 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.990 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.894 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:18.894 00:22:18.894 real 0m15.742s 00:22:18.894 user 0m35.857s 00:22:18.894 sys 0m5.816s 00:22:18.894 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.894 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:18.894 ************************************ 00:22:18.894 END TEST nvmf_shutdown_tc1 00:22:18.894 ************************************ 00:22:18.894 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:18.894 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:18.895 ************************************ 00:22:18.895 START TEST nvmf_shutdown_tc2 00:22:18.895 ************************************ 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:18.895 16:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:18.895 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:18.895 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:18.895 Found net devices under 0000:86:00.0: cvl_0_0 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.895 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:18.896 16:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:18.896 Found net devices under 0000:86:00.1: cvl_0_1 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:18.896 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:19.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:22:19.154 00:22:19.154 --- 10.0.0.2 ping statistics --- 00:22:19.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.154 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:22:19.154 00:22:19.154 --- 10.0.0.1 ping statistics --- 00:22:19.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.154 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:19.154 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:19.413 16:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=601599 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 601599 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 601599 ']' 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.413 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.413 [2024-10-14 16:47:23.861266] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:22:19.413 [2024-10-14 16:47:23.861309] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.413 [2024-10-14 16:47:23.930536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.413 [2024-10-14 16:47:23.971971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.413 [2024-10-14 16:47:23.972005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.413 [2024-10-14 16:47:23.972012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.413 [2024-10-14 16:47:23.972018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.413 [2024-10-14 16:47:23.972023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
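The nvmf/common.sh trace above shows how nvmf_tcp_init builds the point-to-point TCP topology for this run: the target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace, the two ends are addressed on 10.0.0.0/24, TCP port 4420 is opened towards the initiator-side port, connectivity is checked with ping in both directions, and only then is nvmf_tgt launched inside the namespace. A minimal standalone sketch of that sequence, assuming the same interface names and addresses that appear in the trace (an illustration distilled from the traced commands, not the script itself):

# move the target-side port into its own namespace; the initiator-side port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator address in the root namespace, target address inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# accept NVMe/TCP traffic (port 4420) arriving on the initiator-side port
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify both directions, then start the target inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

As the trace shows, the helper also flushes any pre-existing IPv4 addresses from both ports first and tags the iptables rule with an SPDK_NVMF comment so it can be removed again during cleanup.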
00:22:19.413 [2024-10-14 16:47:23.973627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.413 [2024-10-14 16:47:23.973735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.413 [2024-10-14 16:47:23.973839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.413 [2024-10-14 16:47:23.973840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.672 [2024-10-14 16:47:24.109274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.672 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.672 Malloc1 00:22:19.672 [2024-10-14 16:47:24.214980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.672 Malloc2 00:22:19.672 Malloc3 00:22:19.931 Malloc4 00:22:19.931 Malloc5 00:22:19.931 Malloc6 00:22:19.931 Malloc7 00:22:19.931 Malloc8 00:22:19.931 Malloc9 00:22:20.190 Malloc10 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=601680 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 601680 /var/tmp/bdevperf.sock 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 601680 ']' 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.190 16:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:20.190 { 00:22:20.190 "params": { 00:22:20.190 "name": "Nvme$subsystem", 00:22:20.190 "trtype": "$TEST_TRANSPORT", 00:22:20.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.190 "adrfam": "ipv4", 00:22:20.190 "trsvcid": "$NVMF_PORT", 00:22:20.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.190 "hdgst": ${hdgst:-false}, 00:22:20.190 "ddgst": ${ddgst:-false} 00:22:20.190 }, 00:22:20.190 "method": "bdev_nvme_attach_controller" 00:22:20.190 } 00:22:20.190 EOF 00:22:20.190 )") 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:20.190 { 00:22:20.190 "params": { 00:22:20.190 "name": "Nvme$subsystem", 00:22:20.190 "trtype": "$TEST_TRANSPORT", 00:22:20.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.190 "adrfam": "ipv4", 00:22:20.190 "trsvcid": "$NVMF_PORT", 00:22:20.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.190 "hdgst": ${hdgst:-false}, 00:22:20.190 "ddgst": ${ddgst:-false} 00:22:20.190 }, 00:22:20.190 "method": "bdev_nvme_attach_controller" 00:22:20.190 } 00:22:20.190 EOF 00:22:20.190 )") 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:20.190 { 00:22:20.190 "params": { 00:22:20.190 
"name": "Nvme$subsystem", 00:22:20.190 "trtype": "$TEST_TRANSPORT", 00:22:20.190 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.190 "adrfam": "ipv4", 00:22:20.190 "trsvcid": "$NVMF_PORT", 00:22:20.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.190 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.190 "hdgst": ${hdgst:-false}, 00:22:20.190 "ddgst": ${ddgst:-false} 00:22:20.190 }, 00:22:20.190 "method": "bdev_nvme_attach_controller" 00:22:20.190 } 00:22:20.190 EOF 00:22:20.190 )") 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:20.190 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:20.190 { 00:22:20.190 "params": { 00:22:20.190 "name": "Nvme$subsystem", 00:22:20.191 "trtype": "$TEST_TRANSPORT", 00:22:20.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "$NVMF_PORT", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.191 "hdgst": ${hdgst:-false}, 00:22:20.191 "ddgst": ${ddgst:-false} 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 } 00:22:20.191 EOF 00:22:20.191 )") 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:20.191 { 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme$subsystem", 00:22:20.191 "trtype": "$TEST_TRANSPORT", 00:22:20.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "$NVMF_PORT", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.191 "hdgst": ${hdgst:-false}, 00:22:20.191 "ddgst": ${ddgst:-false} 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 } 00:22:20.191 EOF 00:22:20.191 )") 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:20.191 { 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme$subsystem", 00:22:20.191 "trtype": "$TEST_TRANSPORT", 00:22:20.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "$NVMF_PORT", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.191 "hdgst": ${hdgst:-false}, 00:22:20.191 "ddgst": ${ddgst:-false} 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 } 00:22:20.191 EOF 00:22:20.191 )") 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:20.191 { 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme$subsystem", 00:22:20.191 "trtype": "$TEST_TRANSPORT", 00:22:20.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "$NVMF_PORT", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.191 "hdgst": ${hdgst:-false}, 00:22:20.191 "ddgst": ${ddgst:-false} 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 } 00:22:20.191 EOF 00:22:20.191 )") 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:20.191 [2024-10-14 16:47:24.692334] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:22:20.191 [2024-10-14 16:47:24.692385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601680 ] 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:20.191 { 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme$subsystem", 00:22:20.191 "trtype": "$TEST_TRANSPORT", 00:22:20.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "$NVMF_PORT", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.191 "hdgst": ${hdgst:-false}, 00:22:20.191 "ddgst": ${ddgst:-false} 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 } 00:22:20.191 EOF 00:22:20.191 )") 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:20.191 { 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme$subsystem", 00:22:20.191 "trtype": "$TEST_TRANSPORT", 00:22:20.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "$NVMF_PORT", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.191 "hdgst": ${hdgst:-false}, 00:22:20.191 "ddgst": ${ddgst:-false} 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 } 00:22:20.191 EOF 00:22:20.191 )") 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:20.191 { 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme$subsystem", 00:22:20.191 "trtype": "$TEST_TRANSPORT", 00:22:20.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.191 "adrfam": 
"ipv4", 00:22:20.191 "trsvcid": "$NVMF_PORT", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.191 "hdgst": ${hdgst:-false}, 00:22:20.191 "ddgst": ${ddgst:-false} 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 } 00:22:20.191 EOF 00:22:20.191 )") 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:20.191 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme1", 00:22:20.191 "trtype": "tcp", 00:22:20.191 "traddr": "10.0.0.2", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "4420", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.191 "hdgst": false, 00:22:20.191 "ddgst": false 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 },{ 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme2", 00:22:20.191 "trtype": "tcp", 00:22:20.191 "traddr": "10.0.0.2", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "4420", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:20.191 "hdgst": false, 00:22:20.191 "ddgst": false 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 },{ 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme3", 00:22:20.191 "trtype": "tcp", 00:22:20.191 "traddr": "10.0.0.2", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "4420", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:20.191 "hdgst": false, 00:22:20.191 "ddgst": false 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 },{ 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme4", 00:22:20.191 "trtype": "tcp", 00:22:20.191 "traddr": "10.0.0.2", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "4420", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:20.191 "hdgst": false, 00:22:20.191 "ddgst": false 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 },{ 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme5", 00:22:20.191 "trtype": "tcp", 00:22:20.191 "traddr": "10.0.0.2", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "4420", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:20.191 "hdgst": false, 00:22:20.191 "ddgst": false 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 },{ 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme6", 00:22:20.191 "trtype": "tcp", 00:22:20.191 "traddr": "10.0.0.2", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "4420", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:20.191 "hdgst": false, 00:22:20.191 "ddgst": false 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 },{ 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme7", 00:22:20.191 "trtype": "tcp", 00:22:20.191 "traddr": "10.0.0.2", 00:22:20.191 
"adrfam": "ipv4", 00:22:20.191 "trsvcid": "4420", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:20.191 "hdgst": false, 00:22:20.191 "ddgst": false 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.191 },{ 00:22:20.191 "params": { 00:22:20.191 "name": "Nvme8", 00:22:20.191 "trtype": "tcp", 00:22:20.191 "traddr": "10.0.0.2", 00:22:20.191 "adrfam": "ipv4", 00:22:20.191 "trsvcid": "4420", 00:22:20.191 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:20.191 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:20.191 "hdgst": false, 00:22:20.191 "ddgst": false 00:22:20.191 }, 00:22:20.191 "method": "bdev_nvme_attach_controller" 00:22:20.192 },{ 00:22:20.192 "params": { 00:22:20.192 "name": "Nvme9", 00:22:20.192 "trtype": "tcp", 00:22:20.192 "traddr": "10.0.0.2", 00:22:20.192 "adrfam": "ipv4", 00:22:20.192 "trsvcid": "4420", 00:22:20.192 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:20.192 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:20.192 "hdgst": false, 00:22:20.192 "ddgst": false 00:22:20.192 }, 00:22:20.192 "method": "bdev_nvme_attach_controller" 00:22:20.192 },{ 00:22:20.192 "params": { 00:22:20.192 "name": "Nvme10", 00:22:20.192 "trtype": "tcp", 00:22:20.192 "traddr": "10.0.0.2", 00:22:20.192 "adrfam": "ipv4", 00:22:20.192 "trsvcid": "4420", 00:22:20.192 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:20.192 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:20.192 "hdgst": false, 00:22:20.192 "ddgst": false 00:22:20.192 }, 00:22:20.192 "method": "bdev_nvme_attach_controller" 00:22:20.192 }' 00:22:20.192 [2024-10-14 16:47:24.766185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.192 [2024-10-14 16:47:24.807072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.118 Running I/O for 10 seconds... 
00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:22.118 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:22.376 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:22.376 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.377 16:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 601680 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 601680 ']' 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 601680 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.377 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 601680 00:22:22.635 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:22.635 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:22.635 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 601680' 00:22:22.635 killing process with pid 601680 00:22:22.635 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 601680 00:22:22.635 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 601680 00:22:22.635 Received shutdown signal, test time was about 0.895669 seconds 00:22:22.635 00:22:22.635 Latency(us) 00:22:22.635 [2024-10-14T14:47:27.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.635 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.635 Verification LBA range: start 0x0 length 0x400 00:22:22.635 Nvme1n1 : 0.87 292.90 18.31 0.00 0.00 215921.62 15416.56 218702.99 00:22:22.635 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.635 Verification LBA range: start 0x0 length 0x400 00:22:22.635 Nvme2n1 : 0.89 297.10 18.57 0.00 0.00 208360.28 4525.10 213709.78 00:22:22.635 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.635 Verification LBA range: start 0x0 length 0x400 00:22:22.635 Nvme3n1 : 0.87 305.39 19.09 0.00 0.00 198201.67 3261.20 206719.27 00:22:22.635 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.635 Verification LBA range: start 0x0 length 0x400 00:22:22.635 Nvme4n1 : 0.88 291.42 18.21 0.00 0.00 205577.75 16852.11 208716.56 
00:22:22.635 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.635 Verification LBA range: start 0x0 length 0x400 00:22:22.635 Nvme5n1 : 0.89 287.69 17.98 0.00 0.00 204575.70 18225.25 216705.71 00:22:22.635 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.635 Verification LBA range: start 0x0 length 0x400 00:22:22.635 Nvme6n1 : 0.89 286.76 17.92 0.00 0.00 201436.40 18724.57 212711.13 00:22:22.635 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.635 Verification LBA range: start 0x0 length 0x400 00:22:22.635 Nvme7n1 : 0.88 289.79 18.11 0.00 0.00 195273.63 18974.23 210713.84 00:22:22.635 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.635 Verification LBA range: start 0x0 length 0x400 00:22:22.635 Nvme8n1 : 0.90 286.03 17.88 0.00 0.00 194224.15 14105.84 218702.99 00:22:22.635 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.635 Verification LBA range: start 0x0 length 0x400 00:22:22.635 Nvme9n1 : 0.86 227.24 14.20 0.00 0.00 236510.80 3229.99 217704.35 00:22:22.635 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:22.635 Verification LBA range: start 0x0 length 0x400 00:22:22.635 Nvme10n1 : 0.86 222.11 13.88 0.00 0.00 237697.06 15978.30 233682.65 00:22:22.635 [2024-10-14T14:47:27.269Z] =================================================================================================================== 00:22:22.635 [2024-10-14T14:47:27.269Z] Total : 2786.43 174.15 0.00 0.00 208344.36 3229.99 233682.65 00:22:22.893 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 601599 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:23.914 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.915 rmmod nvme_tcp 00:22:23.915 rmmod nvme_fabrics 00:22:23.915 rmmod nvme_keyring 00:22:23.915 16:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 601599 ']' 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 601599 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 601599 ']' 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 601599 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 601599 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 601599' 00:22:23.915 killing process with pid 601599 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 601599 00:22:23.915 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 601599 00:22:24.200 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:24.201 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:24.201 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:24.201 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:24.201 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:24.201 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:24.201 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:24.201 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.201 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.201 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.201 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.201 16:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.777 00:22:26.777 real 0m7.388s 00:22:26.777 user 0m21.760s 00:22:26.777 sys 0m1.371s 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:26.777 ************************************ 00:22:26.777 END TEST nvmf_shutdown_tc2 00:22:26.777 ************************************ 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:26.777 ************************************ 00:22:26.777 START TEST nvmf_shutdown_tc3 00:22:26.777 ************************************ 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.777 16:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.777 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.778 16:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:26.778 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:26.778 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:26.778 16:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:26.778 Found net devices under 0000:86:00.0: cvl_0_0 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:26.778 Found net devices under 0000:86:00.1: cvl_0_1 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.778 16:47:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.778 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:22:26.778 00:22:26.778 --- 10.0.0.2 ping statistics --- 00:22:26.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.778 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:22:26.778 00:22:26.778 --- 10.0.0.1 ping statistics --- 00:22:26.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.778 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=602928 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 602928 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 602928 ']' 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.778 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
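The nvmf_tcp_init trace above (nvmf/common.sh@250-291) builds the tc3 topology: one e810 port (cvl_0_0, NVMF_TARGET_INTERFACE) is moved into a private network namespace to act as the target, while its sibling (cvl_0_1, NVMF_INITIATOR_INTERFACE) stays in the root namespace as the initiator. Condensed into a standalone sketch; the namespace, interface names and addresses are simply the values this run happened to use, not fixed constants:

    # target side lives in its own netns, initiator stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port toward the initiator interface, then verify both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Because the target lives in cvl_0_0_ns_spdk, NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD wrapper, which is why the nvmf_tgt launched just below runs under ip netns exec cvl_0_0_ns_spdk: it listens on 10.0.0.2 inside the namespace while bdevperf connects from the root namespace over the cvl_0_1 side.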
00:22:26.779 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.779 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.779 [2024-10-14 16:47:31.334161] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:22:26.779 [2024-10-14 16:47:31.334203] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.779 [2024-10-14 16:47:31.404590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.037 [2024-10-14 16:47:31.447229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.037 [2024-10-14 16:47:31.447267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.037 [2024-10-14 16:47:31.447274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.037 [2024-10-14 16:47:31.447280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.037 [2024-10-14 16:47:31.447286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.037 [2024-10-14 16:47:31.450619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.037 [2024-10-14 16:47:31.450708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.037 [2024-10-14 16:47:31.450818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.037 [2024-10-14 16:47:31.450818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:27.605 [2024-10-14 16:47:32.217500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:27.605 16:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.605 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.863 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:27.863 Malloc1 
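The create_subsystems phase above loops over num_subsystems={1..10}; each iteration's `cat` (target/shutdown.sh@29) appends one block of RPC commands to rpcs.txt, which is what makes Malloc1 through Malloc10 and the ten cnode subsystems appear around here. The heredoc itself is not shown in this trace, but per subsystem it plausibly amounts to something like the following standard SPDK RPCs (bdev size, serial number and exact option order here are illustrative guesses, not taken from the log):

    # one block per subsystem $i, accumulated in rpcs.txt and replayed against the target
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The listener address and port do match this run (the target reports 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' right after), and the corresponding NQNs reappear later in the bdevperf attach config.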
00:22:27.863 [2024-10-14 16:47:32.333254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.863 Malloc2 00:22:27.863 Malloc3 00:22:27.863 Malloc4 00:22:27.863 Malloc5 00:22:28.122 Malloc6 00:22:28.122 Malloc7 00:22:28.122 Malloc8 00:22:28.122 Malloc9 00:22:28.122 Malloc10 00:22:28.122 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.122 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:28.122 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.122 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=603210 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 603210 /var/tmp/bdevperf.sock 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 603210 ']' 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
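The workload generator for tc3 is bdevperf, launched against its own RPC socket with a JSON config delivered over an anonymous pipe (--json /dev/fd/63 is the process substitution of gen_nvmf_target_json 1 ... 10): a 64-deep queue of 64 KiB verify I/O for 10 seconds against the ten NVMe-oF controllers. Before the test is allowed to kill anything, waitforio (target/shutdown.sh@51-70, traced for tc2 at the top of this excerpt and again for tc3 further down) polls the first bdev until it has actually served reads. Reassembled from that trace (variable names in the sketch are mine), the gate is roughly:

    # poll Nvme1n1 up to 10 times, 0.25 s apart, until it has completed >= 100 reads
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }

In the tc3 trace below the count climbs 3 → 67 → 131 before the loop breaks with ret=0, at which point killprocess is permitted to take the target down while I/O is still in flight, which is the point of the shutdown test.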
00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.381 { 00:22:28.381 "params": { 00:22:28.381 "name": "Nvme$subsystem", 00:22:28.381 "trtype": "$TEST_TRANSPORT", 00:22:28.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.381 "adrfam": "ipv4", 00:22:28.381 "trsvcid": "$NVMF_PORT", 00:22:28.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.381 "hdgst": ${hdgst:-false}, 00:22:28.381 "ddgst": ${ddgst:-false} 00:22:28.381 }, 00:22:28.381 "method": "bdev_nvme_attach_controller" 00:22:28.381 } 00:22:28.381 EOF 00:22:28.381 )") 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.381 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.381 { 00:22:28.381 "params": { 00:22:28.381 "name": "Nvme$subsystem", 00:22:28.381 "trtype": "$TEST_TRANSPORT", 00:22:28.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "$NVMF_PORT", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.382 "hdgst": ${hdgst:-false}, 00:22:28.382 "ddgst": ${ddgst:-false} 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 } 00:22:28.382 EOF 00:22:28.382 )") 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.382 { 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme$subsystem", 00:22:28.382 "trtype": "$TEST_TRANSPORT", 00:22:28.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "$NVMF_PORT", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.382 "hdgst": ${hdgst:-false}, 00:22:28.382 "ddgst": ${ddgst:-false} 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 } 00:22:28.382 EOF 00:22:28.382 )") 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- 
# config+=("$(cat <<-EOF 00:22:28.382 { 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme$subsystem", 00:22:28.382 "trtype": "$TEST_TRANSPORT", 00:22:28.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "$NVMF_PORT", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.382 "hdgst": ${hdgst:-false}, 00:22:28.382 "ddgst": ${ddgst:-false} 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 } 00:22:28.382 EOF 00:22:28.382 )") 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.382 { 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme$subsystem", 00:22:28.382 "trtype": "$TEST_TRANSPORT", 00:22:28.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "$NVMF_PORT", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.382 "hdgst": ${hdgst:-false}, 00:22:28.382 "ddgst": ${ddgst:-false} 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 } 00:22:28.382 EOF 00:22:28.382 )") 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.382 { 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme$subsystem", 00:22:28.382 "trtype": "$TEST_TRANSPORT", 00:22:28.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "$NVMF_PORT", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.382 "hdgst": ${hdgst:-false}, 00:22:28.382 "ddgst": ${ddgst:-false} 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 } 00:22:28.382 EOF 00:22:28.382 )") 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.382 { 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme$subsystem", 00:22:28.382 "trtype": "$TEST_TRANSPORT", 00:22:28.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "$NVMF_PORT", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.382 "hdgst": ${hdgst:-false}, 00:22:28.382 "ddgst": ${ddgst:-false} 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 } 00:22:28.382 EOF 00:22:28.382 )") 00:22:28.382 [2024-10-14 16:47:32.808472] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:22:28.382 [2024-10-14 16:47:32.808518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603210 ] 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.382 { 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme$subsystem", 00:22:28.382 "trtype": "$TEST_TRANSPORT", 00:22:28.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "$NVMF_PORT", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.382 "hdgst": ${hdgst:-false}, 00:22:28.382 "ddgst": ${ddgst:-false} 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 } 00:22:28.382 EOF 00:22:28.382 )") 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.382 { 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme$subsystem", 00:22:28.382 "trtype": "$TEST_TRANSPORT", 00:22:28.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "$NVMF_PORT", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.382 "hdgst": ${hdgst:-false}, 00:22:28.382 "ddgst": ${ddgst:-false} 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 } 00:22:28.382 EOF 00:22:28.382 )") 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:28.382 { 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme$subsystem", 00:22:28.382 "trtype": "$TEST_TRANSPORT", 00:22:28.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "$NVMF_PORT", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.382 "hdgst": ${hdgst:-false}, 00:22:28.382 "ddgst": ${ddgst:-false} 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 } 00:22:28.382 EOF 00:22:28.382 )") 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 
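Each heredoc fragment above is one bdev_nvme_attach_controller call parameterized by the subsystem number; the `jq .` at the end merely validates and pretty-prints the merged result, which is dumped in full just below and handed to bdevperf on /dev/fd/63. Assuming the usual SPDK JSON-config layout (a "subsystems" array whose "bdev" entry carries these method blocks; the outer wrapper itself is not visible in this trace), the config bdevperf consumes looks roughly like:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

with nine more config entries of the same shape for Nvme2 through Nvme10 (cnode2..cnode10, host2..host10). Each attached controller surfaces as a bdev named NvmeXn1, which is why the iostat polling below asks for Nvme1n1.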
00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:28.382 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme1", 00:22:28.382 "trtype": "tcp", 00:22:28.382 "traddr": "10.0.0.2", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "4420", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.382 "hdgst": false, 00:22:28.382 "ddgst": false 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 },{ 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme2", 00:22:28.382 "trtype": "tcp", 00:22:28.382 "traddr": "10.0.0.2", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "4420", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:28.382 "hdgst": false, 00:22:28.382 "ddgst": false 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 },{ 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme3", 00:22:28.382 "trtype": "tcp", 00:22:28.382 "traddr": "10.0.0.2", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "4420", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:28.382 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:28.382 "hdgst": false, 00:22:28.382 "ddgst": false 00:22:28.382 }, 00:22:28.382 "method": "bdev_nvme_attach_controller" 00:22:28.382 },{ 00:22:28.382 "params": { 00:22:28.382 "name": "Nvme4", 00:22:28.382 "trtype": "tcp", 00:22:28.382 "traddr": "10.0.0.2", 00:22:28.382 "adrfam": "ipv4", 00:22:28.382 "trsvcid": "4420", 00:22:28.382 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:28.383 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:28.383 "hdgst": false, 00:22:28.383 "ddgst": false 00:22:28.383 }, 00:22:28.383 "method": "bdev_nvme_attach_controller" 00:22:28.383 },{ 00:22:28.383 "params": { 00:22:28.383 "name": "Nvme5", 00:22:28.383 "trtype": "tcp", 00:22:28.383 "traddr": "10.0.0.2", 00:22:28.383 "adrfam": "ipv4", 00:22:28.383 "trsvcid": "4420", 00:22:28.383 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:28.383 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:28.383 "hdgst": false, 00:22:28.383 "ddgst": false 00:22:28.383 }, 00:22:28.383 "method": "bdev_nvme_attach_controller" 00:22:28.383 },{ 00:22:28.383 "params": { 00:22:28.383 "name": "Nvme6", 00:22:28.383 "trtype": "tcp", 00:22:28.383 "traddr": "10.0.0.2", 00:22:28.383 "adrfam": "ipv4", 00:22:28.383 "trsvcid": "4420", 00:22:28.383 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:28.383 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:28.383 "hdgst": false, 00:22:28.383 "ddgst": false 00:22:28.383 }, 00:22:28.383 "method": "bdev_nvme_attach_controller" 00:22:28.383 },{ 00:22:28.383 "params": { 00:22:28.383 "name": "Nvme7", 00:22:28.383 "trtype": "tcp", 00:22:28.383 "traddr": "10.0.0.2", 00:22:28.383 "adrfam": "ipv4", 00:22:28.383 "trsvcid": "4420", 00:22:28.383 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:28.383 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:28.383 "hdgst": false, 00:22:28.383 "ddgst": false 00:22:28.383 }, 00:22:28.383 "method": "bdev_nvme_attach_controller" 00:22:28.383 },{ 00:22:28.383 "params": { 00:22:28.383 "name": "Nvme8", 00:22:28.383 "trtype": "tcp", 00:22:28.383 "traddr": "10.0.0.2", 00:22:28.383 "adrfam": "ipv4", 00:22:28.383 "trsvcid": "4420", 00:22:28.383 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:28.383 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:28.383 "hdgst": false, 00:22:28.383 "ddgst": false 00:22:28.383 }, 00:22:28.383 "method": "bdev_nvme_attach_controller" 00:22:28.383 },{ 00:22:28.383 "params": { 00:22:28.383 "name": "Nvme9", 00:22:28.383 "trtype": "tcp", 00:22:28.383 "traddr": "10.0.0.2", 00:22:28.383 "adrfam": "ipv4", 00:22:28.383 "trsvcid": "4420", 00:22:28.383 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:28.383 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:28.383 "hdgst": false, 00:22:28.383 "ddgst": false 00:22:28.383 }, 00:22:28.383 "method": "bdev_nvme_attach_controller" 00:22:28.383 },{ 00:22:28.383 "params": { 00:22:28.383 "name": "Nvme10", 00:22:28.383 "trtype": "tcp", 00:22:28.383 "traddr": "10.0.0.2", 00:22:28.383 "adrfam": "ipv4", 00:22:28.383 "trsvcid": "4420", 00:22:28.383 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:28.383 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:28.383 "hdgst": false, 00:22:28.383 "ddgst": false 00:22:28.383 }, 00:22:28.383 "method": "bdev_nvme_attach_controller" 00:22:28.383 }' 00:22:28.383 [2024-10-14 16:47:32.878815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.383 [2024-10-14 16:47:32.919565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.285 Running I/O for 10 seconds... 00:22:30.285 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:30.285 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:30.285 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:30.285 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.285 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.285 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.285 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.286 16:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:30.286 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:30.544 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:30.544 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:30.544 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:30.544 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.544 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.544 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.544 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.544 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:30.544 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:30.544 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:30.818 16:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 602928 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 602928 ']' 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 602928 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 602928 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 602928' 00:22:30.818 killing process with pid 602928 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 602928 00:22:30.818 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 602928
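The trace above is the core of shutdown_tc3: waitforio polls bdevperf over its RPC socket until the Nvme1n1 bdev has completed at least 100 reads (3, then 67, then 131 ops across the 0.25 s polls logged above), and only then is killprocess invoked so the nvmf target (pid 602928) is taken down while I/O is still in flight. A condensed sketch of the two helpers as reconstructed from the traced commands (not the verbatim test sources; rpc_cmd stands for the suite's wrapper around scripts/rpc.py):

# waitforio: gate the shutdown on real I/O having reached the bdev (sketch of the loop traced above)
waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 read_io_count i
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# killprocess: confirm the pid is alive, log, SIGTERM it, and reap it (uname/sudo handling elided)
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid"
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}

# roughly what the tc3 trace above performs:
waitforio /var/tmp/bdevperf.sock Nvme1n1 && killprocess 602928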
00:22:30.818 [2024-10-14 16:47:35.375118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9e030 is same with the state(6) to be set
(the same nvmf_tcp_qpair_set_recv_state error repeats identically, many times per queue pair, while the target tears down its connections: for tqpair=0xa9e030 from 16:47:35.375118, tqpair=0xa9e520 from 16:47:35.380056, tqpair=0xa9e9f0 from 16:47:35.381414, tqpair=0xa9eee0 from 16:47:35.383033, tqpair=0xa9f3b0 from 16:47:35.384196, and tqpair=0xa9f880 from 16:47:35.385559, with the 0xa9f880 repeats running interleaved into the I/O abort notices that follow)
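The burst above is the nvmf TCP transport reporting, for each in-flight qpair, that it is being asked to enter the recv state it is already in (the text is exactly what nvmf_tcp_qpair_set_recv_state at tcp.c:1773 prints), and it first appears immediately after the kill above, so it reads as teardown noise rather than a data-path failure. To gauge how noisy a teardown was, a repeat count per qpair is usually enough; a minimal sketch, assuming the console output has been saved to a hypothetical console.log:

# count the recv-state repeats per queue pair in a saved copy of this console output
grep -o 'recv state of tqpair=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn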
[2024-10-14 16:47:35.385840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.385872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.385890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.385899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.385909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.385919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.385931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.385940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.385949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.385957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.385967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.385978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.385987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.385995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-10-14 16:47:35.386179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-14 16:47:35.386187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822
[2024-10-14 16:47:35.386194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 
16:47:35.386347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 
16:47:35.386506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.822 [2024-10-14 16:47:35.386514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.822 [2024-10-14 16:47:35.386520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 
16:47:35.386668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 
16:47:35.386823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.823 [2024-10-14 16:47:35.386897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.386966] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa2c830 was disconnected and freed. reset controller. 
00:22:30.823 [2024-10-14 16:47:35.387033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.823 [2024-10-14 16:47:35.387043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.387051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.823 [2024-10-14 16:47:35.387059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.387066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.823 [2024-10-14 16:47:35.387073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.387080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.823 [2024-10-14 16:47:35.387086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.387083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0b3c0 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.823 [2024-10-14 16:47:35.387131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.387138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.823 [2024-10-14 16:47:35.387145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:30.823 [2024-10-14 16:47:35.387153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.823 [2024-10-14 16:47:35.387161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.387169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.823 [2024-10-14 16:47:35.387176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.823 [2024-10-14 16:47:35.387183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e6650 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.823 [2024-10-14 16:47:35.387210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.823 [2024-10-14 16:47:35.387218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387240] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f530 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387338] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7df6d0 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e9e10 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with 
the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc41b90 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9fd50 is same with the state(6) to be set 00:22:30.824 [2024-10-14 16:47:35.387573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.824 [2024-10-14 16:47:35.387617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.824 [2024-10-14 16:47:35.387624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.825 [2024-10-14 16:47:35.387630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ea270 is same with the state(6) to be set 00:22:30.825 [2024-10-14 16:47:35.387658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.825 [2024-10-14 16:47:35.387667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.825 [2024-10-14 16:47:35.387680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.825 [2024-10-14 16:47:35.387694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.825 [2024-10-14 16:47:35.387706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42800 is same with the state(6) to be set 00:22:30.825 [2024-10-14 16:47:35.387818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 
[2024-10-14 16:47:35.387919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.387992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.387998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 
16:47:35.388067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 
16:47:35.388212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.825 [2024-10-14 16:47:35.388351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.825 [2024-10-14 16:47:35.388348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.825 [2024-10-14 16:47:35.388358] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.826 [2024-10-14 16:47:35.388364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.826 [2024-10-14 16:47:35.388372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.826 [2024-10-14 16:47:35.388379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.826 [2024-10-14 16:47:35.388387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.826 [2024-10-14 16:47:35.388394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.826 [2024-10-14 16:47:35.388404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.826 [2024-10-14 16:47:35.388412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388463] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 
00:22:30.826 [2024-10-14 16:47:35.388597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is 
same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.388951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0240 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.389555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0710 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.389574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0710 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.389581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0710 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.389587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0710 is same with the state(6) to be set 00:22:30.826 [2024-10-14 16:47:35.403199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.826 [2024-10-14 16:47:35.403224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.826 [2024-10-14 16:47:35.403236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.826 [2024-10-14 16:47:35.403245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.826 [2024-10-14 16:47:35.403257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.826 [2024-10-14 16:47:35.403267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403341] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.403734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.403818] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9ef730 was disconnected and freed. reset controller. 
00:22:30.827 [2024-10-14 16:47:35.404479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 
[2024-10-14 16:47:35.404723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.827 [2024-10-14 16:47:35.404893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.827 [2024-10-14 16:47:35.404903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.404913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 
16:47:35.404924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.404932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.404943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.404952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.404965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.404974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.404985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.404995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 
16:47:35.405124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 
16:47:35.405324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.828 [2024-10-14 16:47:35.405741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.828 [2024-10-14 16:47:35.405752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.405761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.405772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.405781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.405791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.405799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.405839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:30.829 [2024-10-14 16:47:35.405900] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa2b320 was disconnected and freed. reset controller. 00:22:30.829 [2024-10-14 16:47:35.407148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.829 [2024-10-14 16:47:35.407166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.407177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.829 [2024-10-14 16:47:35.407186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.407196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.829 [2024-10-14 16:47:35.407205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.407214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.829 [2024-10-14 16:47:35.407223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.407232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3bcd0 is same with the state(6) to be set 00:22:30.829 [2024-10-14 16:47:35.407258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0b3c0 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.407280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e6650 (9): Bad file 
descriptor 00:22:30.829 [2024-10-14 16:47:35.407300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f530 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.407321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7df6d0 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.407339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e9e10 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.407355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc41b90 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.407375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ea270 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.407394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42800 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.407430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.829 [2024-10-14 16:47:35.407441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.407451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.829 [2024-10-14 16:47:35.407461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.407471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.829 [2024-10-14 16:47:35.407480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.407490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.829 [2024-10-14 16:47:35.407499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.407507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc479d0 is same with the state(6) to be set 00:22:30.829 [2024-10-14 16:47:35.411041] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:30.829 [2024-10-14 16:47:35.411089] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:30.829 [2024-10-14 16:47:35.412006] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:30.829 [2024-10-14 16:47:35.412048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc479d0 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.412249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.829 [2024-10-14 16:47:35.412275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc41b90 with addr=10.0.0.2, port=4420 00:22:30.829 [2024-10-14 16:47:35.412290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc41b90 is same with the state(6) 
to be set 00:22:30.829 [2024-10-14 16:47:35.412452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.829 [2024-10-14 16:47:35.412471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e9e10 with addr=10.0.0.2, port=4420 00:22:30.829 [2024-10-14 16:47:35.412485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e9e10 is same with the state(6) to be set 00:22:30.829 [2024-10-14 16:47:35.413033] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.829 [2024-10-14 16:47:35.413111] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.829 [2024-10-14 16:47:35.413813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc41b90 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.413842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e9e10 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.413905] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.829 [2024-10-14 16:47:35.414012] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.829 [2024-10-14 16:47:35.414076] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.829 [2024-10-14 16:47:35.414139] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.829 [2024-10-14 16:47:35.414200] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:30.829 [2024-10-14 16:47:35.414410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.829 [2024-10-14 16:47:35.414433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc479d0 with addr=10.0.0.2, port=4420 00:22:30.829 [2024-10-14 16:47:35.414449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc479d0 is same with the state(6) to be set 00:22:30.829 [2024-10-14 16:47:35.414464] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:30.829 [2024-10-14 16:47:35.414477] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:30.829 [2024-10-14 16:47:35.414492] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:30.829 [2024-10-14 16:47:35.414516] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:30.829 [2024-10-14 16:47:35.414529] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:30.829 [2024-10-14 16:47:35.414541] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:30.829 [2024-10-14 16:47:35.414671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.829 [2024-10-14 16:47:35.414688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:30.829 [2024-10-14 16:47:35.414702] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc479d0 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.414766] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:30.829 [2024-10-14 16:47:35.414781] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:30.829 [2024-10-14 16:47:35.414793] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:30.829 [2024-10-14 16:47:35.414855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.829 [2024-10-14 16:47:35.417145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3bcd0 (9): Bad file descriptor 00:22:30.829 [2024-10-14 16:47:35.417361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.417383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.417404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.417418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.417436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.417449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.417464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.417478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.417494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.417507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.417523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.417542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.417558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.417571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.417587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:30.829 [2024-10-14 16:47:35.417607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.417624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.417637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.417652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.417665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.829 [2024-10-14 16:47:35.417681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.829 [2024-10-14 16:47:35.417694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.417710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.417722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.417738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.417751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.417767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.417780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.417795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.417807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.417824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.417836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.417852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.417865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.417881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 
16:47:35.417894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.417913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.417926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.417942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.417957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.417973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.417985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.830 [2024-10-14 16:47:35.418542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.830 [2024-10-14 16:47:35.418550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.418862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.418871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee510 is same with the state(6) to be set 00:22:30.831 [2024-10-14 16:47:35.420095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420164] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.831 [2024-10-14 16:47:35.420586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.831 [2024-10-14 16:47:35.420598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:30.832 [2024-10-14 16:47:35.420960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.420989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.420998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 
16:47:35.421150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.421336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.421345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf1d0 is same with the state(6) to be set 00:22:30.832 [2024-10-14 16:47:35.422564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.422579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.832 [2024-10-14 16:47:35.422592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.832 [2024-10-14 16:47:35.422605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422754] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.422986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.422994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.833 [2024-10-14 16:47:35.423418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.833 [2024-10-14 16:47:35.423431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:30.834 [2024-10-14 16:47:35.423549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 
16:47:35.423744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.423822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.423831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbec600 is same with the state(6) to be set 00:22:30.834 [2024-10-14 16:47:35.425056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425169] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.834 [2024-10-14 16:47:35.425446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.834 [2024-10-14 16:47:35.425455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.425981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.425992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.835 [2024-10-14 16:47:35.426209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.835 [2024-10-14 16:47:35.426218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.426228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.426236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.426246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.426256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.426267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.426275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.426286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.426294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.426307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.426315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.426324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbedaf0 is same with the state(6) to be set 00:22:30.836 [2024-10-14 16:47:35.427527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427773] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.427979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.427988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.836 [2024-10-14 16:47:35.428258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.836 [2024-10-14 16:47:35.428265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:30.837 [2024-10-14 16:47:35.428502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 
16:47:35.428665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.428704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.428711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef020 is same with the state(6) to be set 00:22:30.837 [2024-10-14 16:47:35.429709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.837 [2024-10-14 16:47:35.429929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.837 [2024-10-14 16:47:35.429936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.429945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.429951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.429959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.429966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.429974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.429981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.429989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.429996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.838 [2024-10-14 16:47:35.430565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.838 [2024-10-14 16:47:35.430572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.430580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.430587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.430596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.430607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.430616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.430622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.430631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.430638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.430648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.430655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.430663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.430670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.430679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.430686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.430694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.430701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.430709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.430716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.430724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf05a0 is same with the state(6) to be set 00:22:30.839 [2024-10-14 16:47:35.431679] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.839 [2024-10-14 16:47:35.431696] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:30.839 [2024-10-14 16:47:35.431706] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:30.839 [2024-10-14 16:47:35.431715] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:30.839 [2024-10-14 16:47:35.431787] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:30.839 [2024-10-14 16:47:35.431803] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:30.839 [2024-10-14 16:47:35.431865] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:30.839 [2024-10-14 16:47:35.431876] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:30.839 [2024-10-14 16:47:35.432086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.839 [2024-10-14 16:47:35.432101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ea270 with addr=10.0.0.2, port=4420 00:22:30.839 [2024-10-14 16:47:35.432110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ea270 is same with the state(6) to be set 00:22:30.839 [2024-10-14 16:47:35.432240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.839 [2024-10-14 16:47:35.432250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7df6d0 with addr=10.0.0.2, port=4420 00:22:30.839 [2024-10-14 16:47:35.432258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7df6d0 is same with the state(6) to be set 00:22:30.839 [2024-10-14 16:47:35.432385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.839 [2024-10-14 16:47:35.432396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e6650 with addr=10.0.0.2, port=4420 00:22:30.839 [2024-10-14 16:47:35.432403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e6650 is same with the state(6) to be set 00:22:30.839 [2024-10-14 16:47:35.432551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.839 [2024-10-14 16:47:35.432566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0f530 with addr=10.0.0.2, port=4420 00:22:30.839 [2024-10-14 16:47:35.432573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0f530 is same with the state(6) to be set 00:22:30.839 [2024-10-14 16:47:35.433927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.433944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.433961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.433969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.433978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.433986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.433995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434002] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.839 [2024-10-14 16:47:35.434243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.839 [2024-10-14 16:47:35.434252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:30.840 [2024-10-14 16:47:35.434667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 
16:47:35.434826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.840 [2024-10-14 16:47:35.434916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.840 [2024-10-14 16:47:35.434923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-10-14 16:47:35.434931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-10-14 16:47:35.434941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-10-14 16:47:35.434950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-10-14 16:47:35.434957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-10-14 16:47:35.434965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.841 [2024-10-14 16:47:35.434972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.841 [2024-10-14 16:47:35.434979] 
nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf1b20 is same with the state(6) to be set 00:22:30.841 [2024-10-14 16:47:35.435933] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:30.841 [2024-10-14 16:47:35.435952] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:30.841 [2024-10-14 16:47:35.435962] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:31.099 task offset: 24576 on job bdev=Nvme10n1 fails 00:22:31.099 00:22:31.099 Latency(us) 00:22:31.099 [2024-10-14T14:47:35.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.099 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.099 Job: Nvme1n1 ended in about 0.94 seconds with error 00:22:31.099 Verification LBA range: start 0x0 length 0x400 00:22:31.099 Nvme1n1 : 0.94 204.31 12.77 68.10 0.00 232695.71 16727.28 216705.71 00:22:31.099 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.099 Job: Nvme2n1 ended in about 0.93 seconds with error 00:22:31.099 Verification LBA range: start 0x0 length 0x400 00:22:31.099 Nvme2n1 : 0.93 271.42 16.96 68.93 0.00 183057.77 16352.79 205720.62 00:22:31.099 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.099 Job: Nvme3n1 ended in about 0.94 seconds with error 00:22:31.099 Verification LBA range: start 0x0 length 0x400 00:22:31.099 Nvme3n1 : 0.94 203.78 12.74 67.93 0.00 225597.81 15603.81 216705.71 00:22:31.099 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.099 Job: Nvme4n1 ended in about 0.94 seconds with error 00:22:31.099 Verification LBA range: start 0x0 length 0x400 00:22:31.099 Nvme4n1 : 0.94 203.24 12.70 67.75 0.00 222358.43 22094.99 235679.94 00:22:31.099 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.099 Job: Nvme5n1 ended in about 0.95 seconds with error 00:22:31.099 Verification LBA range: start 0x0 length 0x400 00:22:31.099 Nvme5n1 : 0.95 202.71 12.67 67.57 0.00 219175.01 16227.96 211712.49 00:22:31.099 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.099 Job: Nvme6n1 ended in about 0.95 seconds with error 00:22:31.099 Verification LBA range: start 0x0 length 0x400 00:22:31.100 Nvme6n1 : 0.95 202.21 12.64 67.40 0.00 215893.58 16477.62 214708.42 00:22:31.100 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.100 Job: Nvme7n1 ended in about 0.95 seconds with error 00:22:31.100 Verification LBA range: start 0x0 length 0x400 00:22:31.100 Nvme7n1 : 0.95 201.79 12.61 67.26 0.00 212573.38 14542.75 213709.78 00:22:31.100 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.100 Job: Nvme8n1 ended in about 0.96 seconds with error 00:22:31.100 Verification LBA range: start 0x0 length 0x400 00:22:31.100 Nvme8n1 : 0.96 200.89 12.56 66.96 0.00 209791.02 15978.30 210713.84 00:22:31.100 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.100 Job: Nvme9n1 ended in about 0.93 seconds with error 00:22:31.100 Verification LBA range: start 0x0 length 0x400 00:22:31.100 Nvme9n1 : 0.93 206.37 12.90 68.79 0.00 199522.56 5367.71 242670.45 00:22:31.100 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:31.100 Job: Nvme10n1 ended in about 0.93 seconds with error 
00:22:31.100 Verification LBA range: start 0x0 length 0x400 00:22:31.100 Nvme10n1 : 0.93 207.17 12.95 69.06 0.00 194826.24 21595.67 225693.50 00:22:31.100 [2024-10-14T14:47:35.734Z] =================================================================================================================== 00:22:31.100 [2024-10-14T14:47:35.734Z] Total : 2103.86 131.49 679.75 0.00 210896.68 5367.71 242670.45 00:22:31.100 [2024-10-14 16:47:35.466614] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:31.100 [2024-10-14 16:47:35.466662] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:31.100 [2024-10-14 16:47:35.466992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.100 [2024-10-14 16:47:35.467012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0b3c0 with addr=10.0.0.2, port=4420 00:22:31.100 [2024-10-14 16:47:35.467023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0b3c0 is same with the state(6) to be set 00:22:31.100 [2024-10-14 16:47:35.467159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.100 [2024-10-14 16:47:35.467171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc42800 with addr=10.0.0.2, port=4420 00:22:31.100 [2024-10-14 16:47:35.467178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc42800 is same with the state(6) to be set 00:22:31.100 [2024-10-14 16:47:35.467193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ea270 (9): Bad file descriptor 00:22:31.100 [2024-10-14 16:47:35.467205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7df6d0 (9): Bad file descriptor 00:22:31.100 [2024-10-14 16:47:35.467214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e6650 (9): Bad file descriptor 00:22:31.100 [2024-10-14 16:47:35.467223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0f530 (9): Bad file descriptor 00:22:31.100 [2024-10-14 16:47:35.467576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.100 [2024-10-14 16:47:35.467592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7e9e10 with addr=10.0.0.2, port=4420 00:22:31.100 [2024-10-14 16:47:35.467606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e9e10 is same with the state(6) to be set 00:22:31.100 [2024-10-14 16:47:35.467829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.100 [2024-10-14 16:47:35.467841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc41b90 with addr=10.0.0.2, port=4420 00:22:31.100 [2024-10-14 16:47:35.467849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc41b90 is same with the state(6) to be set 00:22:31.100 [2024-10-14 16:47:35.467935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.100 [2024-10-14 16:47:35.467946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc479d0 with addr=10.0.0.2, port=4420 00:22:31.100 [2024-10-14 16:47:35.467954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc479d0 is same with the state(6) to be set 00:22:31.100 
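The bandwidth column in the bdevperf summary above follows directly from the IOPS column at the 64 KiB (65536-byte) IO size reported for every job; a rough cross-check of one row, using only values printed in the table (the awk helper itself is illustrative and is not part of the test run):

# MiB/s ~= IOPS * IO size / 1 MiB; Nvme1n1 row from the summary above
io_size=65536   # "IO size: 65536" as reported for each bdevperf job
awk -v io="$io_size" 'BEGIN {
    iops = 204.31                                   # Nvme1n1 IOPS from the table
    printf "Nvme1n1: %.2f IOPS -> %.2f MiB/s (table reports 12.77)\n", iops, iops * io / 1048576
}'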
[2024-10-14 16:47:35.468037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.100 [2024-10-14 16:47:35.468049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3bcd0 with addr=10.0.0.2, port=4420 00:22:31.100 [2024-10-14 16:47:35.468057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3bcd0 is same with the state(6) to be set 00:22:31.100 [2024-10-14 16:47:35.468068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0b3c0 (9): Bad file descriptor 00:22:31.100 [2024-10-14 16:47:35.468077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc42800 (9): Bad file descriptor 00:22:31.100 [2024-10-14 16:47:35.468091] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:31.100 [2024-10-14 16:47:35.468098] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:31.100 [2024-10-14 16:47:35.468107] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:31.100 [2024-10-14 16:47:35.468119] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:31.100 [2024-10-14 16:47:35.468126] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:31.100 [2024-10-14 16:47:35.468133] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:31.100 [2024-10-14 16:47:35.468142] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:31.100 [2024-10-14 16:47:35.468149] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:31.100 [2024-10-14 16:47:35.468156] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:31.100 [2024-10-14 16:47:35.468165] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:31.100 [2024-10-14 16:47:35.468173] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:31.100 [2024-10-14 16:47:35.468179] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:31.100 [2024-10-14 16:47:35.468209] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.100 [2024-10-14 16:47:35.468221] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.100 [2024-10-14 16:47:35.468232] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.100 [2024-10-14 16:47:35.468242] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.100 [2024-10-14 16:47:35.468252] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:31.100 [2024-10-14 16:47:35.468262] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:31.100 [2024-10-14 16:47:35.468547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.100 [2024-10-14 16:47:35.468559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.100 [2024-10-14 16:47:35.468565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.100 [2024-10-14 16:47:35.468571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.100 [2024-10-14 16:47:35.468579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e9e10 (9): Bad file descriptor 00:22:31.100 [2024-10-14 16:47:35.468589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc41b90 (9): Bad file descriptor 00:22:31.100 [2024-10-14 16:47:35.468598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc479d0 (9): Bad file descriptor 00:22:31.100 [2024-10-14 16:47:35.468614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3bcd0 (9): Bad file descriptor 00:22:31.100 [2024-10-14 16:47:35.468622] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:31.100 [2024-10-14 16:47:35.468628] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:31.100 [2024-10-14 16:47:35.468635] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:31.100 [2024-10-14 16:47:35.468644] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:31.100 [2024-10-14 16:47:35.468655] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:31.100 [2024-10-14 16:47:35.468662] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:31.100 [2024-10-14 16:47:35.468925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.100 [2024-10-14 16:47:35.468938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.100 [2024-10-14 16:47:35.468944] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:31.100 [2024-10-14 16:47:35.468952] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:31.100 [2024-10-14 16:47:35.468959] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:31.100 [2024-10-14 16:47:35.468968] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:31.100 [2024-10-14 16:47:35.468974] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:31.100 [2024-10-14 16:47:35.468981] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
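Every reconnect attempt above fails with errno = 111, i.e. ECONNREFUSED: with the target side being stopped by this shutdown test (note the spdk_app_stop warning above), nothing accepts connections on 10.0.0.2 port 4420 any more, so each controller ends up in the failed state and the resets are abandoned. The errno value can be decoded from a shell with, for example:

# errno 111 on Linux is ECONNREFUSED ("Connection refused")
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'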
00:22:31.100 [2024-10-14 16:47:35.468990] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:31.100 [2024-10-14 16:47:35.468997] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:31.100 [2024-10-14 16:47:35.469003] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:31.100 [2024-10-14 16:47:35.469012] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:31.100 [2024-10-14 16:47:35.469019] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:31.100 [2024-10-14 16:47:35.469026] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:31.100 [2024-10-14 16:47:35.469056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.100 [2024-10-14 16:47:35.469064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.100 [2024-10-14 16:47:35.469070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.100 [2024-10-14 16:47:35.469076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.359 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 603210 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 603210 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 603210 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:32.296 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:32.297 
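The NOT wait 603210 sequence above is the usual autotest idiom for asserting that a command fails: the wrapper runs the command, captures its exit status, and itself succeeds only when that status is non-zero (here 255 from the failed bdevperf job, capped to 127 and finally folded to 1). A minimal sketch of that pattern, simplified from what the trace shows and not the exact autotest_common.sh implementation:

# Succeed only if the wrapped command fails
NOT() {
    local es=0
    "$@" || es=$?              # run the command, capture its exit status
    (( es > 128 )) && es=127   # statuses above 128 (e.g. the 255 seen above) are capped
    (( es != 0 ))              # return success only when the wrapped command failed
}

NOT wait 603210                # passes, because waiting on the failed job returns non-zero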
16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.297 rmmod nvme_tcp 00:22:32.297 rmmod nvme_fabrics 00:22:32.297 rmmod nvme_keyring 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 602928 ']' 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 602928 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 602928 ']' 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 602928 00:22:32.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (602928) - No such process 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 602928 is not found' 00:22:32.297 Process with pid 602928 is not found 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@789 -- # iptables-restore 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.297 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.830 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.830 00:22:34.830 real 0m7.969s 00:22:34.830 user 0m20.018s 00:22:34.830 sys 0m1.378s 00:22:34.830 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.830 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:34.830 ************************************ 00:22:34.830 END TEST nvmf_shutdown_tc3 00:22:34.830 ************************************ 00:22:34.830 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:34.830 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:34.830 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:34.830 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:34.830 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.830 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:34.830 ************************************ 00:22:34.830 START TEST nvmf_shutdown_tc4 00:22:34.830 ************************************ 00:22:34.830 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:34.831 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:34.831 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:34.831 Found net devices under 0000:86:00.0: cvl_0_0 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:34.831 Found net devices under 0000:86:00.1: cvl_0_1 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:34.831 
16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.831 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:34.832 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:34.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:22:34.832 00:22:34.832 --- 10.0.0.2 ping statistics --- 00:22:34.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.832 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:22:34.832 00:22:34.832 --- 10.0.0.1 ping statistics --- 00:22:34.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.832 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=604471 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 604471 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 604471 ']' 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.832 16:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.832 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:34.832 [2024-10-14 16:47:39.370585] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:22:34.832 [2024-10-14 16:47:39.370633] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.832 [2024-10-14 16:47:39.443919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.091 [2024-10-14 16:47:39.485331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.091 [2024-10-14 16:47:39.485366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.091 [2024-10-14 16:47:39.485374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.091 [2024-10-14 16:47:39.485380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.091 [2024-10-14 16:47:39.485385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.091 [2024-10-14 16:47:39.487000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.091 [2024-10-14 16:47:39.487108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.091 [2024-10-14 16:47:39.487213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.091 [2024-10-14 16:47:39.487214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:35.659 [2024-10-14 16:47:40.244985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.659 16:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.659 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:35.918 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.918 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:35.918 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:35.918 
16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.918 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:35.918 Malloc1 00:22:35.918 [2024-10-14 16:47:40.350998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.918 Malloc2 00:22:35.918 Malloc3 00:22:35.918 Malloc4 00:22:35.918 Malloc5 00:22:35.918 Malloc6 00:22:36.176 Malloc7 00:22:36.176 Malloc8 00:22:36.176 Malloc9 00:22:36.176 Malloc10 00:22:36.176 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.176 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:36.176 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:36.176 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:36.176 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=604750 00:22:36.176 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:36.176 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:36.434 [2024-10-14 16:47:40.851221] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
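At this point in the run the target (pid 604471) is up inside the cvl_0_0_ns_spdk namespace with ten Malloc-backed subsystems listening on 10.0.0.2:4420, and spdk_nvme_perf (pid 604750) has just been started against it. nvmf_shutdown_tc4 then waits a few seconds and kills the target while that workload is still in flight, which is what produces the "Write completed with error (sct=0, sc=8)" and "CQ transport error -6" lines that follow. A minimal bash sketch of the sequence, reconstructed only from the commands visible in this trace (paths and PIDs are the ones from this run; rpc_cmd, waitforlisten and killprocess are SPDK test-harness helpers, and the subsystem-creation step is summarized rather than spelled out), looks roughly like this:

  # start the target inside the test namespace (shutdown.sh@19 / nvmf/common.sh@506 above)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!                              # 604471 in this run
  waitforlisten "$nvmfpid"                # harness helper: waits for /var/tmp/spdk.sock
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192     # shutdown.sh@21
  # shutdown.sh@27-36: generate rpcs.txt creating 10 subsystems, each backed by a
  # MallocN bdev and listening on 10.0.0.2:4420, then replay it through rpc_cmd

  # shutdown.sh@148-150: start the perf workload and give it a head start
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!                              # 604750 in this run
  sleep 5

  # shutdown.sh@155: tear the target down while I/O is still outstanding; the
  # initiator's queue pairs then report CQ transport error -6, as seen below
  killprocess "$nvmfpid"

The per-I/O error output that follows is therefore the expected behavior for this test case, which exercises target shutdown under active load.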
00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 604471 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 604471 ']' 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 604471 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 604471 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 604471' 00:22:41.714 killing process with pid 604471 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 604471 00:22:41.714 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 604471 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 starting I/O failed: -6 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 starting I/O failed: -6 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 starting I/O failed: -6 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 starting I/O failed: -6 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 starting I/O failed: -6 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 starting I/O failed: -6 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 starting I/O failed: -6 00:22:41.714 Write completed with error (sct=0, sc=8) 00:22:41.714 Write completed with error 
(sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 [2024-10-14 16:47:45.851812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 
00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 [2024-10-14 16:47:45.852751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 
00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 [2024-10-14 16:47:45.853735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with 
error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.715 Write completed with error (sct=0, sc=8) 00:22:41.715 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 [2024-10-14 16:47:45.855192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:41.716 NVMe io qpair process completion error 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with 
error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 [2024-10-14 16:47:45.856131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with 
error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 [2024-10-14 16:47:45.856981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 starting I/O failed: -6 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 [2024-10-14 
16:47:45.857356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21513f0 is same with starting I/O failed: -6 00:22:41.716 the state(6) to be set 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 [2024-10-14 16:47:45.857396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21513f0 is same with the state(6) to be set 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.716 [2024-10-14 16:47:45.857404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21513f0 is same with the state(6) to be set 00:22:41.716 starting I/O failed: -6 00:22:41.716 [2024-10-14 16:47:45.857412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21513f0 is same with the state(6) to be set 00:22:41.716 [2024-10-14 16:47:45.857419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21513f0 is same with the state(6) to be set 00:22:41.716 Write completed with error (sct=0, sc=8) 00:22:41.717 [2024-10-14 16:47:45.857425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21513f0 is same with the state(6) to be set 00:22:41.717 starting I/O failed: -6 00:22:41.717 [2024-10-14 16:47:45.857432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21513f0 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.857439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21513f0 is same with the state(6) to be set 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 [2024-10-14 16:47:45.857445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21513f0 is same with the state(6) to be set 00:22:41.717 starting I/O failed: -6 00:22:41.717 [2024-10-14 16:47:45.857452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21513f0 is same with the state(6) to be set 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed 
with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 [2024-10-14 16:47:45.858000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed 
with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 [2024-10-14 16:47:45.859969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:41.717 NVMe io qpair process completion error 00:22:41.717 [2024-10-14 16:47:45.861319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f769e0 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.861353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f769e0 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.861362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f769e0 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.861370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f769e0 is same 
with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.861377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f769e0 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.861383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f769e0 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.861979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76040 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.862006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76040 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.862014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76040 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.862021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76040 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.862028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76040 is same with the state(6) to be set 00:22:41.717 [2024-10-14 16:47:45.862034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f76040 is same with the state(6) to be set 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 starting I/O failed: -6 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.717 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 Write completed with error (sct=0, sc=8) 00:22:41.718 starting I/O failed: -6 
00:22:41.718 Write completed with error (sct=0, sc=8)
00:22:41.718 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.718 [2024-10-14 16:47:45.863409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:41.718 NVMe io qpair process completion error
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.718 [2024-10-14 16:47:45.864455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.718 [2024-10-14 16:47:45.865307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.719 [2024-10-14 16:47:45.866295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.719 [2024-10-14 16:47:45.867824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:41.719 NVMe io qpair process completion error
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.720 [2024-10-14 16:47:45.868815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
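The write-completion records carry the NVMe status pair printed as (sct, sc): sct=0 is the generic command status type, and within it sc=8 is "Command Aborted due to SQ Deletion". Writes that were already queued when the connection dropped are completed back as aborted, while brand-new submissions are rejected immediately with -ENXIO (-6), which is what "starting I/O failed: -6" reports. The sketch below shows how those two sides typically look against the SPDK NVMe API; write_done and submit_write are hypothetical names for illustration, not the tool used in this run.

#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback: decodes the (sct, sc) pair seen in the log records.
 * sct=0, sc=8 corresponds to SPDK_NVME_SCT_GENERIC /
 * SPDK_NVME_SC_ABORTED_SQ_DELETION: the queued write was aborted because
 * its submission queue went away during the connection teardown. */
static void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Submission side: once the qpair has failed, a new write is rejected
 * immediately with -ENXIO (-6), matching "starting I/O failed: -6". */
static int
submit_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	     void *buf, uint64_t lba, uint32_t lba_count)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
					write_done, NULL, 0);

	if (rc != 0) {
		printf("starting I/O failed: %d\n", rc);
	}
	return rc;
}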
00:22:41.720 Write completed with error (sct=0, sc=8)
00:22:41.720 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.720 [2024-10-14 16:47:45.869742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.720 [2024-10-14 16:47:45.870721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.721 [2024-10-14 16:47:45.872481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:41.721 NVMe io qpair process completion error
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.721 [2024-10-14 16:47:45.873502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.721 [2024-10-14 16:47:45.874363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.722 [2024-10-14 16:47:45.875402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.722 [2024-10-14 16:47:45.885001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:41.722 NVMe io qpair process completion error
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.723 [2024-10-14 16:47:45.885953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.723 [2024-10-14 16:47:45.886880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.723 [2024-10-14 16:47:45.888095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.724 [2024-10-14 16:47:45.889947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:41.724 NVMe io qpair process completion error
00:22:41.724 Write completed with error (sct=0, sc=8)
00:22:41.724 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records continue ...]
00:22:41.724 Write completed with error (sct=0, sc=8)
00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 [2024-10-14 16:47:45.891091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O 
failed: -6 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 starting I/O failed: -6 00:22:41.724 [2024-10-14 16:47:45.892188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.724 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, 
sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 [2024-10-14 16:47:45.893415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O 
failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 [2024-10-14 16:47:45.900643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:41.725 NVMe io qpair process completion error 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, 
sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 starting I/O failed: -6 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.725 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 [2024-10-14 16:47:45.901627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 
Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 [2024-10-14 16:47:45.902555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write 
completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 [2024-10-14 16:47:45.903593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write 
completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.726 Write completed with error (sct=0, sc=8) 00:22:41.726 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write 
completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 [2024-10-14 16:47:45.906530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:41.727 NVMe io qpair process completion error 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 [2024-10-14 16:47:45.907611] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 [2024-10-14 16:47:45.908539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on 
qpair id 3 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.727 starting I/O failed: -6 00:22:41.727 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with 
error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 [2024-10-14 16:47:45.909627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: 
-6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 starting I/O failed: -6 00:22:41.728 [2024-10-14 16:47:45.912105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:41.728 NVMe io qpair process completion error 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 
00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.728 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error (sct=0, sc=8) 00:22:41.729 Write completed with error 
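The block above is dominated by thousands of near-identical per-I/O records, so for triage it is usually enough to tally them from the saved console output rather than read them. A minimal sketch, assuming the console log has been saved to a file (the name build.log is hypothetical; the grep patterns are taken verbatim from the records above):

  # How many write submissions failed outright
  grep -c 'starting I/O failed: -6' build.log
  # Which qpairs reported CQ transport errors, and how often
  grep -o 'CQ transport error -6 (No such device or address) on qpair id [0-9]*' build.log | sort | uniq -c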
[remaining "Write completed with error (sct=0, sc=8)" records omitted]
00:22:41.729 Initializing NVMe Controllers
00:22:41.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:41.729 Controller IO queue size 128, less than required.
00:22:41.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:41.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:41.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:41.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:41.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:41.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:41.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:41.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:41.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:41.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
[the same "Controller IO queue size 128, less than required" advisory follows each of the attach records above]
00:22:41.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:41.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:41.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:41.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:41.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:41.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:41.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:41.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:41.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:41.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:41.729 Initialization complete. Launching workers.
00:22:41.729 ========================================================
00:22:41.729 Latency(us)
00:22:41.729 Device Information : IOPS MiB/s Average min max
00:22:41.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2204.67 94.73 58065.28 950.92 105173.03
00:22:41.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2198.07 94.45 58261.53 958.16 113157.09
00:22:41.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2199.99 94.53 58292.40 940.73 123418.53
00:22:41.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2204.45 94.72 58202.80 738.28 125238.89
00:22:41.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2197.22 94.41 57658.48 751.51 99670.28
00:22:41.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2185.31 93.90 57980.95 865.33 97478.75
00:22:41.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2201.05 94.58 58073.49 533.30 123862.90
00:22:41.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2178.07 93.59 58186.58 693.07 95985.30
00:22:41.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2205.09 94.75 57483.29 683.78 98414.84
00:22:41.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2249.77 96.67 56355.85 887.66 92984.84
00:22:41.729 ========================================================
00:22:41.729 Total : 22023.69 946.33 57852.29 533.30 125238.89
00:22:41.729
16:47:45.921652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14864c0 is same with the state(6) to be set 00:22:41.729 [2024-10-14 16:47:45.921682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14817f0 is same with the state(6) to be set 00:22:41.730 [2024-10-14 16:47:45.921711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14819d0 is same with the state(6) to be set 00:22:41.730 [2024-10-14 16:47:45.921739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1486190 is same with the state(6) to be set 00:22:41.730 [2024-10-14 16:47:45.921778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147ffc0 is same with the state(6) to be set 00:22:41.730 [2024-10-14 16:47:45.921806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147f630 is same with the state(6) to be set 00:22:41.730 [2024-10-14 16:47:45.921835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481bb0 is same with the state(6) to be set 00:22:41.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:41.730 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 604750 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 604750 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 604750 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:42.666 rmmod nvme_tcp 00:22:42.666 rmmod nvme_fabrics 00:22:42.666 rmmod nvme_keyring 00:22:42.666 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 604471 ']' 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 604471 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 604471 ']' 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 604471 00:22:42.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (604471) - No such process 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 604471 is not found' 00:22:42.925 Process with pid 604471 is not found 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.925 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.830 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:44.830 00:22:44.830 real 0m10.379s 00:22:44.830 user 0m27.453s 00:22:44.830 sys 0m5.272s 00:22:44.830 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:44.830 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:44.830 ************************************ 00:22:44.830 END TEST nvmf_shutdown_tc4 00:22:44.830 ************************************ 00:22:44.830 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:44.830 00:22:44.830 real 0m41.995s 00:22:44.830 user 1m45.322s 00:22:44.830 sys 0m14.156s 00:22:44.830 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:44.830 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:44.830 ************************************ 00:22:44.830 END TEST nvmf_shutdown 00:22:44.830 ************************************ 00:22:44.830 16:47:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:44.830 00:22:44.830 real 11m36.669s 00:22:44.830 user 25m3.776s 00:22:44.830 sys 3m32.480s 00:22:44.830 16:47:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:44.830 16:47:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:44.830 ************************************ 00:22:44.830 END TEST nvmf_target_extra 00:22:44.830 ************************************ 00:22:45.089 16:47:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:45.089 16:47:49 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:45.089 16:47:49 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.089 16:47:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.089 ************************************ 00:22:45.089 START TEST nvmf_host 00:22:45.089 ************************************ 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:45.089 * Looking for test storage... 
00:22:45.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.089 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:45.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.090 --rc genhtml_branch_coverage=1 00:22:45.090 --rc genhtml_function_coverage=1 00:22:45.090 --rc genhtml_legend=1 00:22:45.090 --rc geninfo_all_blocks=1 00:22:45.090 --rc geninfo_unexecuted_blocks=1 00:22:45.090 00:22:45.090 ' 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:45.090 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.090 --rc genhtml_branch_coverage=1 00:22:45.090 --rc genhtml_function_coverage=1 00:22:45.090 --rc genhtml_legend=1 00:22:45.090 --rc geninfo_all_blocks=1 00:22:45.090 --rc geninfo_unexecuted_blocks=1 00:22:45.090 00:22:45.090 ' 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:45.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.090 --rc genhtml_branch_coverage=1 00:22:45.090 --rc genhtml_function_coverage=1 00:22:45.090 --rc genhtml_legend=1 00:22:45.090 --rc geninfo_all_blocks=1 00:22:45.090 --rc geninfo_unexecuted_blocks=1 00:22:45.090 00:22:45.090 ' 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:45.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.090 --rc genhtml_branch_coverage=1 00:22:45.090 --rc genhtml_function_coverage=1 00:22:45.090 --rc genhtml_legend=1 00:22:45.090 --rc geninfo_all_blocks=1 00:22:45.090 --rc geninfo_unexecuted_blocks=1 00:22:45.090 00:22:45.090 ' 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:45.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:45.090 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.350 ************************************ 00:22:45.350 START TEST nvmf_multicontroller 00:22:45.350 ************************************ 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:45.350 * Looking for test storage... 00:22:45.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:45.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.350 --rc genhtml_branch_coverage=1 00:22:45.350 --rc genhtml_function_coverage=1 00:22:45.350 --rc genhtml_legend=1 00:22:45.350 --rc geninfo_all_blocks=1 00:22:45.350 --rc geninfo_unexecuted_blocks=1 00:22:45.350 00:22:45.350 ' 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:45.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.350 --rc genhtml_branch_coverage=1 00:22:45.350 --rc genhtml_function_coverage=1 00:22:45.350 --rc genhtml_legend=1 00:22:45.350 --rc geninfo_all_blocks=1 00:22:45.350 --rc geninfo_unexecuted_blocks=1 00:22:45.350 00:22:45.350 ' 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:45.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.350 --rc genhtml_branch_coverage=1 00:22:45.350 --rc genhtml_function_coverage=1 00:22:45.350 --rc genhtml_legend=1 00:22:45.350 --rc geninfo_all_blocks=1 00:22:45.350 --rc geninfo_unexecuted_blocks=1 00:22:45.350 00:22:45.350 ' 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:45.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.350 --rc genhtml_branch_coverage=1 00:22:45.350 --rc genhtml_function_coverage=1 00:22:45.350 --rc genhtml_legend=1 00:22:45.350 --rc geninfo_all_blocks=1 00:22:45.350 --rc geninfo_unexecuted_blocks=1 00:22:45.350 00:22:45.350 ' 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:45.350 16:47:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:45.350 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:45.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:45.351 16:47:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.351 16:47:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.919 
16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.919 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:51.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:51.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.920 16:47:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:51.920 Found net devices under 0000:86:00.0: cvl_0_0 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:51.920 Found net devices under 0000:86:00.1: cvl_0_1 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:22:51.920 00:22:51.920 --- 10.0.0.2 ping statistics --- 00:22:51.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.920 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:22:51.920 00:22:51.920 --- 10.0.0.1 ping statistics --- 00:22:51.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.920 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=609270 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 609270 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 609270 ']' 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.920 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.921 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.921 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.921 16:47:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 [2024-10-14 16:47:55.995631] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:22:51.921 [2024-10-14 16:47:55.995679] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.921 [2024-10-14 16:47:56.068211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:51.921 [2024-10-14 16:47:56.107286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.921 [2024-10-14 16:47:56.107322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.921 [2024-10-14 16:47:56.107330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.921 [2024-10-14 16:47:56.107335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.921 [2024-10-14 16:47:56.107340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.921 [2024-10-14 16:47:56.108785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.921 [2024-10-14 16:47:56.108890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.921 [2024-10-14 16:47:56.108890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 [2024-10-14 16:47:56.253092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 Malloc0 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 [2024-10-14 16:47:56.317888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 [2024-10-14 16:47:56.325798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 Malloc1 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=609413 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 609413 /var/tmp/bdevperf.sock 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 609413 ']' 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.921 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.181 NVMe0n1 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.181 1 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.181 request: 00:22:52.181 { 00:22:52.181 "name": "NVMe0", 00:22:52.181 "trtype": "tcp", 00:22:52.181 "traddr": "10.0.0.2", 00:22:52.181 "adrfam": "ipv4", 00:22:52.181 "trsvcid": "4420", 00:22:52.181 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:52.181 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:52.181 "hostaddr": "10.0.0.1", 00:22:52.181 "prchk_reftag": false, 00:22:52.181 "prchk_guard": false, 00:22:52.181 "hdgst": false, 00:22:52.181 "ddgst": false, 00:22:52.181 "allow_unrecognized_csi": false, 00:22:52.181 "method": "bdev_nvme_attach_controller", 00:22:52.181 "req_id": 1 00:22:52.181 } 00:22:52.181 Got JSON-RPC error response 00:22:52.181 response: 00:22:52.181 { 00:22:52.181 "code": -114, 00:22:52.181 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:52.181 } 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:52.181 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.182 request: 00:22:52.182 { 00:22:52.182 "name": "NVMe0", 00:22:52.182 "trtype": "tcp", 00:22:52.182 "traddr": "10.0.0.2", 00:22:52.182 "adrfam": "ipv4", 00:22:52.182 "trsvcid": "4420", 00:22:52.182 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:52.182 "hostaddr": "10.0.0.1", 00:22:52.182 "prchk_reftag": false, 00:22:52.182 "prchk_guard": false, 00:22:52.182 "hdgst": false, 00:22:52.182 "ddgst": false, 00:22:52.182 "allow_unrecognized_csi": false, 00:22:52.182 "method": "bdev_nvme_attach_controller", 00:22:52.182 "req_id": 1 00:22:52.182 } 00:22:52.182 Got JSON-RPC error response 00:22:52.182 response: 00:22:52.182 { 00:22:52.182 "code": -114, 00:22:52.182 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:52.182 } 00:22:52.182 16:47:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.182 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 request: 00:22:52.440 { 00:22:52.440 "name": "NVMe0", 00:22:52.440 "trtype": "tcp", 00:22:52.440 "traddr": "10.0.0.2", 00:22:52.440 "adrfam": "ipv4", 00:22:52.440 "trsvcid": "4420", 00:22:52.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.440 "hostaddr": "10.0.0.1", 00:22:52.440 "prchk_reftag": false, 00:22:52.440 "prchk_guard": false, 00:22:52.440 "hdgst": false, 00:22:52.440 "ddgst": false, 00:22:52.440 "multipath": "disable", 00:22:52.440 "allow_unrecognized_csi": false, 00:22:52.440 "method": "bdev_nvme_attach_controller", 00:22:52.440 "req_id": 1 00:22:52.440 } 00:22:52.440 Got JSON-RPC error response 00:22:52.440 response: 00:22:52.440 { 00:22:52.440 "code": -114, 00:22:52.440 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:52.440 } 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:52.440 16:47:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 request: 00:22:52.440 { 00:22:52.440 "name": "NVMe0", 00:22:52.440 "trtype": "tcp", 00:22:52.440 "traddr": "10.0.0.2", 00:22:52.440 "adrfam": "ipv4", 00:22:52.440 "trsvcid": "4420", 00:22:52.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.440 "hostaddr": "10.0.0.1", 00:22:52.440 "prchk_reftag": false, 00:22:52.440 "prchk_guard": false, 00:22:52.440 "hdgst": false, 00:22:52.440 "ddgst": false, 00:22:52.440 "multipath": "failover", 00:22:52.440 "allow_unrecognized_csi": false, 00:22:52.440 "method": "bdev_nvme_attach_controller", 00:22:52.440 "req_id": 1 00:22:52.440 } 00:22:52.440 Got JSON-RPC error response 00:22:52.440 response: 00:22:52.440 { 00:22:52.440 "code": -114, 00:22:52.440 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:52.440 } 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 NVMe0n1 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.440 16:47:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 00:22:52.440 16:47:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.440 16:47:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:52.440 16:47:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.440 16:47:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:52.440 16:47:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:52.440 16:47:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.440 16:47:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:52.440 16:47:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.816 { 00:22:53.816 "results": [ 00:22:53.816 { 00:22:53.816 "job": "NVMe0n1", 00:22:53.816 "core_mask": "0x1", 00:22:53.816 "workload": "write", 00:22:53.816 "status": "finished", 00:22:53.816 "queue_depth": 128, 00:22:53.816 "io_size": 4096, 00:22:53.816 "runtime": 1.006593, 00:22:53.816 "iops": 25216.745993663775, 00:22:53.816 "mibps": 98.50291403774912, 00:22:53.816 "io_failed": 0, 00:22:53.816 "io_timeout": 0, 00:22:53.816 "avg_latency_us": 5069.489789904379, 00:22:53.816 "min_latency_us": 3027.1390476190477, 00:22:53.816 "max_latency_us": 10985.081904761904 00:22:53.816 } 00:22:53.816 ], 00:22:53.816 "core_count": 1 00:22:53.816 } 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 609413 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 609413 ']' 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 609413 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 609413 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 609413' 00:22:53.816 killing process with pid 609413 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 609413 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 609413 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:53.816 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:53.817 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:53.817 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:22:53.817 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:22:53.817 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:53.817 [2024-10-14 16:47:56.428718] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:22:53.817 [2024-10-14 16:47:56.428765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609413 ] 00:22:53.817 [2024-10-14 16:47:56.500607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.817 [2024-10-14 16:47:56.542982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.817 [2024-10-14 16:47:57.040895] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 1d3ca910-ecbc-4f7a-9c35-e7803e60e07b already exists 00:22:53.817 [2024-10-14 16:47:57.040923] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:1d3ca910-ecbc-4f7a-9c35-e7803e60e07b alias for bdev NVMe1n1 00:22:53.817 [2024-10-14 16:47:57.040931] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:53.817 Running I/O for 1 seconds... 00:22:53.817 25190.00 IOPS, 98.40 MiB/s 00:22:53.817 Latency(us) 00:22:53.817 [2024-10-14T14:47:58.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.817 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:53.817 NVMe0n1 : 1.01 25216.75 98.50 0.00 0.00 5069.49 3027.14 10985.08 00:22:53.817 [2024-10-14T14:47:58.451Z] =================================================================================================================== 00:22:53.817 [2024-10-14T14:47:58.451Z] Total : 25216.75 98.50 0.00 0.00 5069.49 3027.14 10985.08 00:22:53.817 Received shutdown signal, test time was about 1.000000 seconds 00:22:53.817 00:22:53.817 Latency(us) 00:22:53.817 [2024-10-14T14:47:58.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.817 [2024-10-14T14:47:58.451Z] =================================================================================================================== 00:22:53.817 [2024-10-14T14:47:58.451Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:53.817 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:53.817 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.075 rmmod nvme_tcp 00:22:54.075 rmmod nvme_fabrics 00:22:54.075 rmmod nvme_keyring 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:54.075 
16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 609270 ']' 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 609270 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 609270 ']' 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 609270 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 609270 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 609270' 00:22:54.075 killing process with pid 609270 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 609270 00:22:54.075 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 609270 00:22:54.334 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:54.334 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:54.334 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:54.334 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:54.334 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:22:54.334 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:54.334 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:22:54.334 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.334 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.335 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.335 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.335 16:47:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.236 16:48:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:56.237 00:22:56.237 real 0m11.098s 00:22:56.237 user 0m11.843s 00:22:56.237 sys 0m5.255s 00:22:56.237 16:48:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:56.237 16:48:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.237 ************************************ 00:22:56.237 END TEST nvmf_multicontroller 00:22:56.237 ************************************ 00:22:56.496 16:48:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:56.496 16:48:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:56.496 16:48:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:56.496 16:48:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.496 ************************************ 00:22:56.496 START TEST nvmf_aer 00:22:56.496 ************************************ 00:22:56.496 16:48:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:56.496 * Looking for test storage... 00:22:56.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.496 --rc genhtml_branch_coverage=1 00:22:56.496 --rc genhtml_function_coverage=1 00:22:56.496 --rc genhtml_legend=1 00:22:56.496 --rc geninfo_all_blocks=1 00:22:56.496 --rc geninfo_unexecuted_blocks=1 00:22:56.496 00:22:56.496 ' 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.496 --rc genhtml_branch_coverage=1 00:22:56.496 --rc genhtml_function_coverage=1 00:22:56.496 --rc genhtml_legend=1 00:22:56.496 --rc geninfo_all_blocks=1 00:22:56.496 --rc geninfo_unexecuted_blocks=1 00:22:56.496 00:22:56.496 ' 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.496 --rc genhtml_branch_coverage=1 00:22:56.496 --rc genhtml_function_coverage=1 00:22:56.496 --rc genhtml_legend=1 00:22:56.496 --rc geninfo_all_blocks=1 00:22:56.496 --rc geninfo_unexecuted_blocks=1 00:22:56.496 00:22:56.496 ' 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:56.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:56.496 --rc genhtml_branch_coverage=1 00:22:56.496 --rc genhtml_function_coverage=1 00:22:56.496 --rc genhtml_legend=1 00:22:56.496 --rc geninfo_all_blocks=1 00:22:56.496 --rc geninfo_unexecuted_blocks=1 00:22:56.496 00:22:56.496 ' 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.496 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.497 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.497 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:56.497 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.497 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:56.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.755 16:48:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.344 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:03.344 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:03.345 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:03.345 Found net devices under 0000:86:00.0: cvl_0_0 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:03.345 16:48:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:03.345 Found net devices under 0000:86:00.1: cvl_0_1 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:03.345 16:48:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:03.345 
16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:03.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:23:03.345 00:23:03.345 --- 10.0.0.2 ping statistics --- 00:23:03.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.345 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:03.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:23:03.345 00:23:03.345 --- 10.0.0.1 ping statistics --- 00:23:03.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.345 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=613412 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 613412 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 613412 ']' 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:03.345 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.345 [2024-10-14 16:48:07.149289] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:23:03.345 [2024-10-14 16:48:07.149336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.345 [2024-10-14 16:48:07.219873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:03.345 [2024-10-14 16:48:07.263437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.345 [2024-10-14 16:48:07.263475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.345 [2024-10-14 16:48:07.263483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.345 [2024-10-14 16:48:07.263490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.345 [2024-10-14 16:48:07.263495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.345 [2024-10-14 16:48:07.265061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.345 [2024-10-14 16:48:07.265100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.345 [2024-10-14 16:48:07.265211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.345 [2024-10-14 16:48:07.265212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.604 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.604 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:03.604 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:03.604 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:03.604 16:48:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.604 [2024-10-14 16:48:08.032281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.604 Malloc0 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:03.604 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.605 [2024-10-14 16:48:08.098316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.605 [ 00:23:03.605 { 00:23:03.605 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:03.605 "subtype": "Discovery", 00:23:03.605 "listen_addresses": [], 00:23:03.605 "allow_any_host": true, 00:23:03.605 "hosts": [] 00:23:03.605 }, 00:23:03.605 { 00:23:03.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.605 "subtype": "NVMe", 00:23:03.605 "listen_addresses": [ 00:23:03.605 { 00:23:03.605 "trtype": "TCP", 00:23:03.605 "adrfam": "IPv4", 00:23:03.605 "traddr": "10.0.0.2", 00:23:03.605 "trsvcid": "4420" 00:23:03.605 } 00:23:03.605 ], 00:23:03.605 "allow_any_host": true, 00:23:03.605 "hosts": [], 00:23:03.605 "serial_number": "SPDK00000000000001", 00:23:03.605 "model_number": "SPDK bdev Controller", 00:23:03.605 "max_namespaces": 2, 00:23:03.605 "min_cntlid": 1, 00:23:03.605 "max_cntlid": 65519, 00:23:03.605 "namespaces": [ 00:23:03.605 { 00:23:03.605 "nsid": 1, 00:23:03.605 "bdev_name": "Malloc0", 00:23:03.605 "name": "Malloc0", 00:23:03.605 "nguid": "6221DD603F4E46F0B367CCAF88BBBC78", 00:23:03.605 "uuid": "6221dd60-3f4e-46f0-b367-ccaf88bbbc78" 00:23:03.605 } 00:23:03.605 ] 00:23:03.605 } 00:23:03.605 ] 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=613661 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:03.605 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.864 Malloc1 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.864 Asynchronous Event Request test 00:23:03.864 Attaching to 10.0.0.2 00:23:03.864 Attached to 10.0.0.2 00:23:03.864 Registering asynchronous event callbacks... 00:23:03.864 Starting namespace attribute notice tests for all controllers... 00:23:03.864 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:03.864 aer_cb - Changed Namespace 00:23:03.864 Cleaning up... 
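Taken together, the rpc_cmd lines above are the whole target-side setup for the AER test: create the TCP transport, export the Malloc0 bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1, listen on 10.0.0.2:4420, start the host-side aer tool, then hot-add a second namespace so the target emits a Changed Namespace List AEN (log page 4), which the tool's aer_cb reports before cleaning up. rpc_cmd is the test suite's wrapper around scripts/rpc.py, so the equivalent direct calls would look roughly like the sketch below (paths relative to the SPDK repo root; the touch file is how the script knows the tool is ready before it changes the namespace list):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: connect and wait for namespace-change AENs
  rm -f /tmp/aer_touch_file
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &

  # hot-add a second namespace to trigger the AEN the tool is waiting for
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait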
00:23:03.864 [ 00:23:03.864 { 00:23:03.864 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:03.864 "subtype": "Discovery", 00:23:03.864 "listen_addresses": [], 00:23:03.864 "allow_any_host": true, 00:23:03.864 "hosts": [] 00:23:03.864 }, 00:23:03.864 { 00:23:03.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.864 "subtype": "NVMe", 00:23:03.864 "listen_addresses": [ 00:23:03.864 { 00:23:03.864 "trtype": "TCP", 00:23:03.864 "adrfam": "IPv4", 00:23:03.864 "traddr": "10.0.0.2", 00:23:03.864 "trsvcid": "4420" 00:23:03.864 } 00:23:03.864 ], 00:23:03.864 "allow_any_host": true, 00:23:03.864 "hosts": [], 00:23:03.864 "serial_number": "SPDK00000000000001", 00:23:03.864 "model_number": "SPDK bdev Controller", 00:23:03.864 "max_namespaces": 2, 00:23:03.864 "min_cntlid": 1, 00:23:03.864 "max_cntlid": 65519, 00:23:03.864 "namespaces": [ 00:23:03.864 { 00:23:03.864 "nsid": 1, 00:23:03.864 "bdev_name": "Malloc0", 00:23:03.864 "name": "Malloc0", 00:23:03.864 "nguid": "6221DD603F4E46F0B367CCAF88BBBC78", 00:23:03.864 "uuid": "6221dd60-3f4e-46f0-b367-ccaf88bbbc78" 00:23:03.864 }, 00:23:03.864 { 00:23:03.864 "nsid": 2, 00:23:03.864 "bdev_name": "Malloc1", 00:23:03.864 "name": "Malloc1", 00:23:03.864 "nguid": "67E2AC9CF7334ED08D7D7788CCEFC064", 00:23:03.864 "uuid": "67e2ac9c-f733-4ed0-8d7d-7788ccefc064" 00:23:03.864 } 00:23:03.864 ] 00:23:03.864 } 00:23:03.864 ] 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.864 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 613661 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:03.865 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:03.865 rmmod 
nvme_tcp 00:23:03.865 rmmod nvme_fabrics 00:23:03.865 rmmod nvme_keyring 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 613412 ']' 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 613412 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 613412 ']' 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 613412 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 613412 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 613412' 00:23:04.124 killing process with pid 613412 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 613412 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 613412 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.124 16:48:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.658 16:48:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.658 00:23:06.658 real 0m9.873s 00:23:06.658 user 0m7.694s 00:23:06.658 sys 0m4.951s 00:23:06.658 16:48:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:06.658 16:48:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.658 ************************************ 00:23:06.658 END TEST nvmf_aer 00:23:06.658 ************************************ 00:23:06.658 16:48:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:06.658 16:48:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:06.658 16:48:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:06.658 16:48:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.658 ************************************ 00:23:06.658 START TEST nvmf_async_init 00:23:06.658 ************************************ 00:23:06.659 16:48:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:06.659 * Looking for test storage... 00:23:06.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:06.659 16:48:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:06.659 16:48:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:06.659 16:48:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:06.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.659 --rc genhtml_branch_coverage=1 00:23:06.659 --rc genhtml_function_coverage=1 00:23:06.659 --rc genhtml_legend=1 00:23:06.659 --rc geninfo_all_blocks=1 00:23:06.659 --rc geninfo_unexecuted_blocks=1 00:23:06.659 00:23:06.659 ' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:06.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.659 --rc genhtml_branch_coverage=1 00:23:06.659 --rc genhtml_function_coverage=1 00:23:06.659 --rc genhtml_legend=1 00:23:06.659 --rc geninfo_all_blocks=1 00:23:06.659 --rc geninfo_unexecuted_blocks=1 00:23:06.659 00:23:06.659 ' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:06.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.659 --rc genhtml_branch_coverage=1 00:23:06.659 --rc genhtml_function_coverage=1 00:23:06.659 --rc genhtml_legend=1 00:23:06.659 --rc geninfo_all_blocks=1 00:23:06.659 --rc geninfo_unexecuted_blocks=1 00:23:06.659 00:23:06.659 ' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:06.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.659 --rc genhtml_branch_coverage=1 00:23:06.659 --rc genhtml_function_coverage=1 00:23:06.659 --rc genhtml_legend=1 00:23:06.659 --rc geninfo_all_blocks=1 00:23:06.659 --rc geninfo_unexecuted_blocks=1 00:23:06.659 00:23:06.659 ' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.659 16:48:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:06.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:06.659 16:48:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=af70db49838647b8a7bb8d70bffac90a 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:06.659 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.660 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:06.660 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:06.660 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:06.660 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.660 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.660 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.660 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:06.660 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:06.660 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.660 16:48:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:13.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:13.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:13.228 Found net devices under 0000:86:00.0: cvl_0_0 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.228 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:13.229 Found net devices under 0000:86:00.1: cvl_0_1 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.229 16:48:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:13.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:23:13.229 00:23:13.229 --- 10.0.0.2 ping statistics --- 00:23:13.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.229 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:23:13.229 00:23:13.229 --- 10.0.0.1 ping statistics --- 00:23:13.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.229 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=617576 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 617576 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 617576 ']' 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:13.229 16:48:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.229 [2024-10-14 16:48:17.039565] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:23:13.229 [2024-10-14 16:48:17.039613] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.229 [2024-10-14 16:48:17.110840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.229 [2024-10-14 16:48:17.151450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.229 [2024-10-14 16:48:17.151483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.229 [2024-10-14 16:48:17.151490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.229 [2024-10-14 16:48:17.151496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.229 [2024-10-14 16:48:17.151501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.229 [2024-10-14 16:48:17.152048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.229 [2024-10-14 16:48:17.285648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.229 null0 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g af70db49838647b8a7bb8d70bffac90a 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.229 [2024-10-14 16:48:17.333892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.229 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.230 nvme0n1 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.230 [ 00:23:13.230 { 00:23:13.230 "name": "nvme0n1", 00:23:13.230 "aliases": [ 00:23:13.230 "af70db49-8386-47b8-a7bb-8d70bffac90a" 00:23:13.230 ], 00:23:13.230 "product_name": "NVMe disk", 00:23:13.230 "block_size": 512, 00:23:13.230 "num_blocks": 2097152, 00:23:13.230 "uuid": "af70db49-8386-47b8-a7bb-8d70bffac90a", 00:23:13.230 "numa_id": 1, 00:23:13.230 "assigned_rate_limits": { 00:23:13.230 "rw_ios_per_sec": 0, 00:23:13.230 "rw_mbytes_per_sec": 0, 00:23:13.230 "r_mbytes_per_sec": 0, 00:23:13.230 "w_mbytes_per_sec": 0 00:23:13.230 }, 00:23:13.230 "claimed": false, 00:23:13.230 "zoned": false, 00:23:13.230 "supported_io_types": { 00:23:13.230 "read": true, 00:23:13.230 "write": true, 00:23:13.230 "unmap": false, 00:23:13.230 "flush": true, 00:23:13.230 "reset": true, 00:23:13.230 "nvme_admin": true, 00:23:13.230 "nvme_io": true, 00:23:13.230 "nvme_io_md": false, 00:23:13.230 "write_zeroes": true, 00:23:13.230 "zcopy": false, 00:23:13.230 "get_zone_info": false, 00:23:13.230 "zone_management": false, 00:23:13.230 "zone_append": false, 00:23:13.230 "compare": true, 00:23:13.230 "compare_and_write": true, 00:23:13.230 "abort": true, 00:23:13.230 "seek_hole": false, 00:23:13.230 "seek_data": false, 00:23:13.230 "copy": true, 00:23:13.230 "nvme_iov_md": false 00:23:13.230 }, 00:23:13.230 
"memory_domains": [ 00:23:13.230 { 00:23:13.230 "dma_device_id": "system", 00:23:13.230 "dma_device_type": 1 00:23:13.230 } 00:23:13.230 ], 00:23:13.230 "driver_specific": { 00:23:13.230 "nvme": [ 00:23:13.230 { 00:23:13.230 "trid": { 00:23:13.230 "trtype": "TCP", 00:23:13.230 "adrfam": "IPv4", 00:23:13.230 "traddr": "10.0.0.2", 00:23:13.230 "trsvcid": "4420", 00:23:13.230 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:13.230 }, 00:23:13.230 "ctrlr_data": { 00:23:13.230 "cntlid": 1, 00:23:13.230 "vendor_id": "0x8086", 00:23:13.230 "model_number": "SPDK bdev Controller", 00:23:13.230 "serial_number": "00000000000000000000", 00:23:13.230 "firmware_revision": "25.01", 00:23:13.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.230 "oacs": { 00:23:13.230 "security": 0, 00:23:13.230 "format": 0, 00:23:13.230 "firmware": 0, 00:23:13.230 "ns_manage": 0 00:23:13.230 }, 00:23:13.230 "multi_ctrlr": true, 00:23:13.230 "ana_reporting": false 00:23:13.230 }, 00:23:13.230 "vs": { 00:23:13.230 "nvme_version": "1.3" 00:23:13.230 }, 00:23:13.230 "ns_data": { 00:23:13.230 "id": 1, 00:23:13.230 "can_share": true 00:23:13.230 } 00:23:13.230 } 00:23:13.230 ], 00:23:13.230 "mp_policy": "active_passive" 00:23:13.230 } 00:23:13.230 } 00:23:13.230 ] 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.230 [2024-10-14 16:48:17.594412] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:13.230 [2024-10-14 16:48:17.594469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f8060 (9): Bad file descriptor 00:23:13.230 [2024-10-14 16:48:17.726681] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.230 [ 00:23:13.230 { 00:23:13.230 "name": "nvme0n1", 00:23:13.230 "aliases": [ 00:23:13.230 "af70db49-8386-47b8-a7bb-8d70bffac90a" 00:23:13.230 ], 00:23:13.230 "product_name": "NVMe disk", 00:23:13.230 "block_size": 512, 00:23:13.230 "num_blocks": 2097152, 00:23:13.230 "uuid": "af70db49-8386-47b8-a7bb-8d70bffac90a", 00:23:13.230 "numa_id": 1, 00:23:13.230 "assigned_rate_limits": { 00:23:13.230 "rw_ios_per_sec": 0, 00:23:13.230 "rw_mbytes_per_sec": 0, 00:23:13.230 "r_mbytes_per_sec": 0, 00:23:13.230 "w_mbytes_per_sec": 0 00:23:13.230 }, 00:23:13.230 "claimed": false, 00:23:13.230 "zoned": false, 00:23:13.230 "supported_io_types": { 00:23:13.230 "read": true, 00:23:13.230 "write": true, 00:23:13.230 "unmap": false, 00:23:13.230 "flush": true, 00:23:13.230 "reset": true, 00:23:13.230 "nvme_admin": true, 00:23:13.230 "nvme_io": true, 00:23:13.230 "nvme_io_md": false, 00:23:13.230 "write_zeroes": true, 00:23:13.230 "zcopy": false, 00:23:13.230 "get_zone_info": false, 00:23:13.230 "zone_management": false, 00:23:13.230 "zone_append": false, 00:23:13.230 "compare": true, 00:23:13.230 "compare_and_write": true, 00:23:13.230 "abort": true, 00:23:13.230 "seek_hole": false, 00:23:13.230 "seek_data": false, 00:23:13.230 "copy": true, 00:23:13.230 "nvme_iov_md": false 00:23:13.230 }, 00:23:13.230 "memory_domains": [ 00:23:13.230 { 00:23:13.230 "dma_device_id": "system", 00:23:13.230 "dma_device_type": 1 00:23:13.230 } 00:23:13.230 ], 00:23:13.230 "driver_specific": { 00:23:13.230 "nvme": [ 00:23:13.230 { 00:23:13.230 "trid": { 00:23:13.230 "trtype": "TCP", 00:23:13.230 "adrfam": "IPv4", 00:23:13.230 "traddr": "10.0.0.2", 00:23:13.230 "trsvcid": "4420", 00:23:13.230 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:13.230 }, 00:23:13.230 "ctrlr_data": { 00:23:13.230 "cntlid": 2, 00:23:13.230 "vendor_id": "0x8086", 00:23:13.230 "model_number": "SPDK bdev Controller", 00:23:13.230 "serial_number": "00000000000000000000", 00:23:13.230 "firmware_revision": "25.01", 00:23:13.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.230 "oacs": { 00:23:13.230 "security": 0, 00:23:13.230 "format": 0, 00:23:13.230 "firmware": 0, 00:23:13.230 "ns_manage": 0 00:23:13.230 }, 00:23:13.230 "multi_ctrlr": true, 00:23:13.230 "ana_reporting": false 00:23:13.230 }, 00:23:13.230 "vs": { 00:23:13.230 "nvme_version": "1.3" 00:23:13.230 }, 00:23:13.230 "ns_data": { 00:23:13.230 "id": 1, 00:23:13.230 "can_share": true 00:23:13.230 } 00:23:13.230 } 00:23:13.230 ], 00:23:13.230 "mp_policy": "active_passive" 00:23:13.230 } 00:23:13.230 } 00:23:13.230 ] 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
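The two bdev_get_bdevs dumps differ essentially only in ctrlr_data.cntlid (1 before the reset, 2 after), which is how the re-established fabric connection shows up. A quick way to watch just that field, reusing the JSON layout printed above (jq is not part of the test script, only an illustration):

  scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # prints 1, then 2 after the reset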
00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.WXlr9XhKaX 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.WXlr9XhKaX 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.WXlr9XhKaX 00:23:13.230 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.231 [2024-10-14 16:48:17.794999] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.231 [2024-10-14 16:48:17.795097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.231 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.231 [2024-10-14 16:48:17.819076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.489 nvme0n1 00:23:13.489 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.490 [ 00:23:13.490 { 00:23:13.490 "name": "nvme0n1", 00:23:13.490 "aliases": [ 00:23:13.490 "af70db49-8386-47b8-a7bb-8d70bffac90a" 00:23:13.490 ], 00:23:13.490 "product_name": "NVMe disk", 00:23:13.490 "block_size": 512, 00:23:13.490 "num_blocks": 2097152, 00:23:13.490 "uuid": "af70db49-8386-47b8-a7bb-8d70bffac90a", 00:23:13.490 "numa_id": 1, 00:23:13.490 "assigned_rate_limits": { 00:23:13.490 "rw_ios_per_sec": 0, 00:23:13.490 "rw_mbytes_per_sec": 0, 00:23:13.490 "r_mbytes_per_sec": 0, 00:23:13.490 "w_mbytes_per_sec": 0 00:23:13.490 }, 00:23:13.490 "claimed": false, 00:23:13.490 "zoned": false, 00:23:13.490 "supported_io_types": { 00:23:13.490 "read": true, 00:23:13.490 "write": true, 00:23:13.490 "unmap": false, 00:23:13.490 "flush": true, 00:23:13.490 "reset": true, 00:23:13.490 "nvme_admin": true, 00:23:13.490 "nvme_io": true, 00:23:13.490 "nvme_io_md": false, 00:23:13.490 "write_zeroes": true, 00:23:13.490 "zcopy": false, 00:23:13.490 "get_zone_info": false, 00:23:13.490 "zone_management": false, 00:23:13.490 "zone_append": false, 00:23:13.490 "compare": true, 00:23:13.490 "compare_and_write": true, 00:23:13.490 "abort": true, 00:23:13.490 "seek_hole": false, 00:23:13.490 "seek_data": false, 00:23:13.490 "copy": true, 00:23:13.490 "nvme_iov_md": false 00:23:13.490 }, 00:23:13.490 "memory_domains": [ 00:23:13.490 { 00:23:13.490 "dma_device_id": "system", 00:23:13.490 "dma_device_type": 1 00:23:13.490 } 00:23:13.490 ], 00:23:13.490 "driver_specific": { 00:23:13.490 "nvme": [ 00:23:13.490 { 00:23:13.490 "trid": { 00:23:13.490 "trtype": "TCP", 00:23:13.490 "adrfam": "IPv4", 00:23:13.490 "traddr": "10.0.0.2", 00:23:13.490 "trsvcid": "4421", 00:23:13.490 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:13.490 }, 00:23:13.490 "ctrlr_data": { 00:23:13.490 "cntlid": 3, 00:23:13.490 "vendor_id": "0x8086", 00:23:13.490 "model_number": "SPDK bdev Controller", 00:23:13.490 "serial_number": "00000000000000000000", 00:23:13.490 "firmware_revision": "25.01", 00:23:13.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.490 "oacs": { 00:23:13.490 "security": 0, 00:23:13.490 "format": 0, 00:23:13.490 "firmware": 0, 00:23:13.490 "ns_manage": 0 00:23:13.490 }, 00:23:13.490 "multi_ctrlr": true, 00:23:13.490 "ana_reporting": false 00:23:13.490 }, 00:23:13.490 "vs": { 00:23:13.490 "nvme_version": "1.3" 00:23:13.490 }, 00:23:13.490 "ns_data": { 00:23:13.490 "id": 1, 00:23:13.490 "can_share": true 00:23:13.490 } 00:23:13.490 } 00:23:13.490 ], 00:23:13.490 "mp_policy": "active_passive" 00:23:13.490 } 00:23:13.490 } 00:23:13.490 ] 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.WXlr9XhKaX 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
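The TLS leg repeats the same attach flow against a second listener, but with a PSK registered in the keyring and the subsystem restricted to a single host NQN. Condensed to its commands (key material, NQNs and ports exactly as in the log; scripts/rpc.py again stands in for rpc_cmd):

  key_path=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"                      # key file is owner-only, as in the test

  rpc=scripts/rpc.py
  $rpc keyring_file_add_key key0 "$key_path"
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  # Host side: reconnect on 4421 with the same key and host NQN, verify, then tear down.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  $rpc bdev_get_bdevs -b nvme0n1
  $rpc bdev_nvme_detach_controller nvme0
  rm -f "$key_path"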
00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.490 rmmod nvme_tcp 00:23:13.490 rmmod nvme_fabrics 00:23:13.490 rmmod nvme_keyring 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 617576 ']' 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 617576 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 617576 ']' 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 617576 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.490 16:48:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 617576 00:23:13.490 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:13.490 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:13.490 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 617576' 00:23:13.490 killing process with pid 617576 00:23:13.490 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 617576 00:23:13.490 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 617576 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.750 
16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.750 16:48:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.653 16:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.653 00:23:15.653 real 0m9.384s 00:23:15.653 user 0m3.028s 00:23:15.653 sys 0m4.774s 00:23:15.653 16:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:15.653 16:48:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:15.653 ************************************ 00:23:15.653 END TEST nvmf_async_init 00:23:15.653 ************************************ 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.912 ************************************ 00:23:15.912 START TEST dma 00:23:15.912 ************************************ 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:15.912 * Looking for test storage... 00:23:15.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:15.912 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:15.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.913 --rc genhtml_branch_coverage=1 00:23:15.913 --rc genhtml_function_coverage=1 00:23:15.913 --rc genhtml_legend=1 00:23:15.913 --rc geninfo_all_blocks=1 00:23:15.913 --rc geninfo_unexecuted_blocks=1 00:23:15.913 00:23:15.913 ' 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:15.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.913 --rc genhtml_branch_coverage=1 00:23:15.913 --rc genhtml_function_coverage=1 00:23:15.913 --rc genhtml_legend=1 00:23:15.913 --rc geninfo_all_blocks=1 00:23:15.913 --rc geninfo_unexecuted_blocks=1 00:23:15.913 00:23:15.913 ' 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:15.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.913 --rc genhtml_branch_coverage=1 00:23:15.913 --rc genhtml_function_coverage=1 00:23:15.913 --rc genhtml_legend=1 00:23:15.913 --rc geninfo_all_blocks=1 00:23:15.913 --rc geninfo_unexecuted_blocks=1 00:23:15.913 00:23:15.913 ' 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:15.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.913 --rc genhtml_branch_coverage=1 00:23:15.913 --rc genhtml_function_coverage=1 00:23:15.913 --rc genhtml_legend=1 00:23:15.913 --rc geninfo_all_blocks=1 00:23:15.913 --rc geninfo_unexecuted_blocks=1 00:23:15.913 00:23:15.913 ' 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.913 
16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:15.913 00:23:15.913 real 0m0.206s 00:23:15.913 user 0m0.128s 00:23:15.913 sys 0m0.092s 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:15.913 16:48:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:15.913 ************************************ 00:23:15.913 END TEST dma 00:23:15.913 ************************************ 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.172 ************************************ 00:23:16.172 START TEST nvmf_identify 00:23:16.172 
************************************ 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:16.172 * Looking for test storage... 00:23:16.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:16.172 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.173 --rc genhtml_branch_coverage=1 00:23:16.173 --rc genhtml_function_coverage=1 00:23:16.173 --rc genhtml_legend=1 00:23:16.173 --rc geninfo_all_blocks=1 00:23:16.173 --rc geninfo_unexecuted_blocks=1 00:23:16.173 00:23:16.173 ' 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.173 --rc genhtml_branch_coverage=1 00:23:16.173 --rc genhtml_function_coverage=1 00:23:16.173 --rc genhtml_legend=1 00:23:16.173 --rc geninfo_all_blocks=1 00:23:16.173 --rc geninfo_unexecuted_blocks=1 00:23:16.173 00:23:16.173 ' 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.173 --rc genhtml_branch_coverage=1 00:23:16.173 --rc genhtml_function_coverage=1 00:23:16.173 --rc genhtml_legend=1 00:23:16.173 --rc geninfo_all_blocks=1 00:23:16.173 --rc geninfo_unexecuted_blocks=1 00:23:16.173 00:23:16.173 ' 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:16.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:16.173 --rc genhtml_branch_coverage=1 00:23:16.173 --rc genhtml_function_coverage=1 00:23:16.173 --rc genhtml_legend=1 00:23:16.173 --rc geninfo_all_blocks=1 00:23:16.173 --rc geninfo_unexecuted_blocks=1 00:23:16.173 00:23:16.173 ' 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:16.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:16.173 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:16.432 16:48:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:23.004 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:23.004 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:23.004 Found net devices under 0000:86:00.0: cvl_0_0 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:23.004 Found net devices under 0000:86:00.1: cvl_0_1 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.004 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:23.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:23:23.005 00:23:23.005 --- 10.0.0.2 ping statistics --- 00:23:23.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.005 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:23:23.005 00:23:23.005 --- 10.0.0.1 ping statistics --- 00:23:23.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.005 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=621391 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 621391 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 621391 ']' 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.005 16:48:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.005 [2024-10-14 16:48:26.827621] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
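The nvmftestinit plumbing recorded above moves one of the two cvl_0_* ports discovered earlier into its own network namespace and runs the target there, so initiator and target traffic crosses a real link even on a single machine. Stripped of the helper functions in nvmf/common.sh, the sequence amounts to roughly the following (interface names and addresses as in the log; the iptables comment option and absolute paths are omitted):

  # Target port goes into a namespace; the initiator port stays in the root namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator interface and sanity-check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target itself is then launched inside the namespace, as above.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &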
00:23:23.005 [2024-10-14 16:48:26.827664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.005 [2024-10-14 16:48:26.900252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:23.005 [2024-10-14 16:48:26.943582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.005 [2024-10-14 16:48:26.943619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.005 [2024-10-14 16:48:26.943627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.005 [2024-10-14 16:48:26.943634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.005 [2024-10-14 16:48:26.943638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.005 [2024-10-14 16:48:26.945191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.005 [2024-10-14 16:48:26.945301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.005 [2024-10-14 16:48:26.945410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.005 [2024-10-14 16:48:26.945410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.005 [2024-10-14 16:48:27.046655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.005 Malloc0 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.005 [2024-10-14 16:48:27.142362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.005 [ 00:23:23.005 { 00:23:23.005 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:23.005 "subtype": "Discovery", 00:23:23.005 "listen_addresses": [ 00:23:23.005 { 00:23:23.005 "trtype": "TCP", 00:23:23.005 "adrfam": "IPv4", 00:23:23.005 "traddr": "10.0.0.2", 00:23:23.005 "trsvcid": "4420" 00:23:23.005 } 00:23:23.005 ], 00:23:23.005 "allow_any_host": true, 00:23:23.005 "hosts": [] 00:23:23.005 }, 00:23:23.005 { 00:23:23.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.005 "subtype": "NVMe", 00:23:23.005 "listen_addresses": [ 00:23:23.005 { 00:23:23.005 "trtype": "TCP", 00:23:23.005 "adrfam": "IPv4", 00:23:23.005 "traddr": "10.0.0.2", 00:23:23.005 "trsvcid": "4420" 00:23:23.005 } 00:23:23.005 ], 00:23:23.005 "allow_any_host": true, 00:23:23.005 "hosts": [], 00:23:23.005 "serial_number": "SPDK00000000000001", 00:23:23.005 "model_number": "SPDK bdev Controller", 00:23:23.005 "max_namespaces": 32, 00:23:23.005 "min_cntlid": 1, 00:23:23.005 "max_cntlid": 65519, 00:23:23.005 "namespaces": [ 00:23:23.005 { 00:23:23.005 "nsid": 1, 00:23:23.005 "bdev_name": "Malloc0", 00:23:23.005 "name": "Malloc0", 00:23:23.005 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:23.005 "eui64": "ABCDEF0123456789", 00:23:23.005 "uuid": "041b0935-1fdd-4f1b-b95a-aa40f345966c" 00:23:23.005 } 00:23:23.005 ] 00:23:23.005 } 00:23:23.005 ] 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.005 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:23.005 [2024-10-14 16:48:27.194020] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:23:23.005 [2024-10-14 16:48:27.194054] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621420 ] 00:23:23.005 [2024-10-14 16:48:27.220291] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:23.005 [2024-10-14 16:48:27.220334] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:23.005 [2024-10-14 16:48:27.220338] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:23.005 [2024-10-14 16:48:27.220351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:23.005 [2024-10-14 16:48:27.220359] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:23.006 [2024-10-14 16:48:27.223898] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:23.006 [2024-10-14 16:48:27.223933] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd7c760 0 00:23:23.006 [2024-10-14 16:48:27.231616] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:23.006 [2024-10-14 16:48:27.231629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:23.006 [2024-10-14 16:48:27.231634] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:23.006 [2024-10-14 16:48:27.231637] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:23.006 [2024-10-14 16:48:27.231667] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.231672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.231676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd7c760) 00:23:23.006 [2024-10-14 16:48:27.231688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:23.006 [2024-10-14 16:48:27.231705] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc480, cid 0, qid 0 00:23:23.006 [2024-10-14 16:48:27.239612] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.006 [2024-10-14 16:48:27.239620] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.006 [2024-10-14 16:48:27.239623] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.239627] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc480) on tqpair=0xd7c760 00:23:23.006 [2024-10-14 16:48:27.239639] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:23.006 [2024-10-14 16:48:27.239648] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:23.006 [2024-10-14 16:48:27.239653] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:23.006 [2024-10-14 16:48:27.239666] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.239670] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.239673] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd7c760) 00:23:23.006 [2024-10-14 16:48:27.239680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.006 [2024-10-14 16:48:27.239693] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc480, cid 0, qid 0 00:23:23.006 [2024-10-14 16:48:27.239857] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.006 [2024-10-14 16:48:27.239863] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.006 [2024-10-14 16:48:27.239866] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.239869] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc480) on tqpair=0xd7c760 00:23:23.006 [2024-10-14 16:48:27.239874] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:23.006 [2024-10-14 16:48:27.239880] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:23.006 [2024-10-14 16:48:27.239887] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.239890] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.239893] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd7c760) 00:23:23.006 [2024-10-14 16:48:27.239899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.006 [2024-10-14 16:48:27.239909] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc480, cid 0, qid 0 00:23:23.006 [2024-10-14 16:48:27.239969] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.006 [2024-10-14 16:48:27.239975] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.006 [2024-10-14 16:48:27.239978] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.239981] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc480) on tqpair=0xd7c760 00:23:23.006 [2024-10-14 16:48:27.239985] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:23.006 [2024-10-14 16:48:27.239993] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:23.006 [2024-10-14 16:48:27.239999] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240005] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd7c760) 00:23:23.006 [2024-10-14 16:48:27.240011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.006 [2024-10-14 16:48:27.240020] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc480, cid 0, qid 0 00:23:23.006 
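For reference, the subsystem this identify pass is probing was configured a few lines earlier; condensed from the xtrace above, the RPC sequence is the one below (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0       # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420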
[2024-10-14 16:48:27.240083] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.006 [2024-10-14 16:48:27.240088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.006 [2024-10-14 16:48:27.240091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc480) on tqpair=0xd7c760 00:23:23.006 [2024-10-14 16:48:27.240099] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:23.006 [2024-10-14 16:48:27.240109] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240112] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240116] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd7c760) 00:23:23.006 [2024-10-14 16:48:27.240121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.006 [2024-10-14 16:48:27.240131] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc480, cid 0, qid 0 00:23:23.006 [2024-10-14 16:48:27.240194] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.006 [2024-10-14 16:48:27.240200] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.006 [2024-10-14 16:48:27.240203] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240206] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc480) on tqpair=0xd7c760 00:23:23.006 [2024-10-14 16:48:27.240210] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:23.006 [2024-10-14 16:48:27.240214] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:23.006 [2024-10-14 16:48:27.240221] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:23.006 [2024-10-14 16:48:27.240326] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:23.006 [2024-10-14 16:48:27.240331] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:23.006 [2024-10-14 16:48:27.240339] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240342] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240345] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd7c760) 00:23:23.006 [2024-10-14 16:48:27.240351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.006 [2024-10-14 16:48:27.240360] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc480, cid 0, qid 0 00:23:23.006 [2024-10-14 16:48:27.240427] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.006 [2024-10-14 16:48:27.240432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:23.006 [2024-10-14 16:48:27.240435] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240438] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc480) on tqpair=0xd7c760 00:23:23.006 [2024-10-14 16:48:27.240443] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:23.006 [2024-10-14 16:48:27.240451] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240454] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240457] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd7c760) 00:23:23.006 [2024-10-14 16:48:27.240463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.006 [2024-10-14 16:48:27.240471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc480, cid 0, qid 0 00:23:23.006 [2024-10-14 16:48:27.240531] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.006 [2024-10-14 16:48:27.240537] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.006 [2024-10-14 16:48:27.240540] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240543] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc480) on tqpair=0xd7c760 00:23:23.006 [2024-10-14 16:48:27.240549] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:23.006 [2024-10-14 16:48:27.240553] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:23.006 [2024-10-14 16:48:27.240560] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:23.006 [2024-10-14 16:48:27.240567] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:23.006 [2024-10-14 16:48:27.240575] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240579] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd7c760) 00:23:23.006 [2024-10-14 16:48:27.240584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.006 [2024-10-14 16:48:27.240594] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc480, cid 0, qid 0 00:23:23.006 [2024-10-14 16:48:27.240693] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.006 [2024-10-14 16:48:27.240700] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.006 [2024-10-14 16:48:27.240703] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240707] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd7c760): datao=0, datal=4096, cccid=0 00:23:23.006 [2024-10-14 16:48:27.240711] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xddc480) on tqpair(0xd7c760): expected_datao=0, payload_size=4096 
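The admin-queue exchange traced here is driven by the identify utility started above. Pieced back together from the split command line, that invocation is shown below, followed by a hypothetical nvme-cli check against the same listener (not part of this trace, included only as a rough kernel-initiator equivalent):

    # As run by host/identify.sh against the discovery subsystem, with all debug log flags:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

    # Hypothetical equivalent discovery query with nvme-cli (not executed in this run):
    nvme discover -t tcp -a 10.0.0.2 -s 4420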
00:23:23.006 [2024-10-14 16:48:27.240715] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240722] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240725] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240738] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.006 [2024-10-14 16:48:27.240743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.006 [2024-10-14 16:48:27.240746] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.006 [2024-10-14 16:48:27.240749] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc480) on tqpair=0xd7c760 00:23:23.006 [2024-10-14 16:48:27.240756] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:23.006 [2024-10-14 16:48:27.240760] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:23.007 [2024-10-14 16:48:27.240764] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:23.007 [2024-10-14 16:48:27.240769] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:23.007 [2024-10-14 16:48:27.240773] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:23.007 [2024-10-14 16:48:27.240777] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:23.007 [2024-10-14 16:48:27.240785] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:23.007 [2024-10-14 16:48:27.240794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240797] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240800] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.240806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:23.007 [2024-10-14 16:48:27.240819] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc480, cid 0, qid 0 00:23:23.007 [2024-10-14 16:48:27.240890] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.007 [2024-10-14 16:48:27.240896] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.007 [2024-10-14 16:48:27.240899] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240902] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc480) on tqpair=0xd7c760 00:23:23.007 [2024-10-14 16:48:27.240909] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240912] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240915] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.240920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.007 [2024-10-14 16:48:27.240926] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240929] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.240937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.007 [2024-10-14 16:48:27.240942] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240945] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240948] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.240953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.007 [2024-10-14 16:48:27.240958] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240961] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240964] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.240969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.007 [2024-10-14 16:48:27.240973] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:23.007 [2024-10-14 16:48:27.240983] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:23.007 [2024-10-14 16:48:27.240989] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.240992] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.240997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.007 [2024-10-14 16:48:27.241008] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc480, cid 0, qid 0 00:23:23.007 [2024-10-14 16:48:27.241013] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc600, cid 1, qid 0 00:23:23.007 [2024-10-14 16:48:27.241017] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc780, cid 2, qid 0 00:23:23.007 [2024-10-14 16:48:27.241021] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.007 [2024-10-14 16:48:27.241025] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddca80, cid 4, qid 0 00:23:23.007 [2024-10-14 16:48:27.241123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.007 [2024-10-14 16:48:27.241129] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.007 [2024-10-14 16:48:27.241132] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.241135] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddca80) on 
tqpair=0xd7c760 00:23:23.007 [2024-10-14 16:48:27.241141] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:23.007 [2024-10-14 16:48:27.241146] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:23.007 [2024-10-14 16:48:27.241155] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.241158] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.241164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.007 [2024-10-14 16:48:27.241173] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddca80, cid 4, qid 0 00:23:23.007 [2024-10-14 16:48:27.241249] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.007 [2024-10-14 16:48:27.241255] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.007 [2024-10-14 16:48:27.241258] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.241261] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd7c760): datao=0, datal=4096, cccid=4 00:23:23.007 [2024-10-14 16:48:27.241265] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xddca80) on tqpair(0xd7c760): expected_datao=0, payload_size=4096 00:23:23.007 [2024-10-14 16:48:27.241268] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.241274] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.241277] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.281742] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.007 [2024-10-14 16:48:27.281754] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.007 [2024-10-14 16:48:27.281757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.281761] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddca80) on tqpair=0xd7c760 00:23:23.007 [2024-10-14 16:48:27.281773] nvme_ctrlr.c:4220:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:23.007 [2024-10-14 16:48:27.281795] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.281799] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.281806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.007 [2024-10-14 16:48:27.281812] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.281816] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.281819] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.281824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.007 [2024-10-14 16:48:27.281837] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddca80, cid 4, qid 0 00:23:23.007 [2024-10-14 16:48:27.281842] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddcc00, cid 5, qid 0 00:23:23.007 [2024-10-14 16:48:27.281940] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.007 [2024-10-14 16:48:27.281946] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.007 [2024-10-14 16:48:27.281949] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.281952] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd7c760): datao=0, datal=1024, cccid=4 00:23:23.007 [2024-10-14 16:48:27.281956] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xddca80) on tqpair(0xd7c760): expected_datao=0, payload_size=1024 00:23:23.007 [2024-10-14 16:48:27.281960] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.281968] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.281971] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.281976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.007 [2024-10-14 16:48:27.281981] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.007 [2024-10-14 16:48:27.281984] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.281987] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddcc00) on tqpair=0xd7c760 00:23:23.007 [2024-10-14 16:48:27.326606] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.007 [2024-10-14 16:48:27.326629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.007 [2024-10-14 16:48:27.326633] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.326636] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddca80) on tqpair=0xd7c760 00:23:23.007 [2024-10-14 16:48:27.326654] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.326658] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.326665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.007 [2024-10-14 16:48:27.326683] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddca80, cid 4, qid 0 00:23:23.007 [2024-10-14 16:48:27.326831] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.007 [2024-10-14 16:48:27.326836] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.007 [2024-10-14 16:48:27.326840] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.326843] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd7c760): datao=0, datal=3072, cccid=4 00:23:23.007 [2024-10-14 16:48:27.326847] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xddca80) on tqpair(0xd7c760): expected_datao=0, payload_size=3072 00:23:23.007 [2024-10-14 16:48:27.326851] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.326871] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.007 
[2024-10-14 16:48:27.326874] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.326909] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.007 [2024-10-14 16:48:27.326915] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.007 [2024-10-14 16:48:27.326918] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.326921] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddca80) on tqpair=0xd7c760 00:23:23.007 [2024-10-14 16:48:27.326928] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.007 [2024-10-14 16:48:27.326932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd7c760) 00:23:23.007 [2024-10-14 16:48:27.326937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.007 [2024-10-14 16:48:27.326951] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddca80, cid 4, qid 0 00:23:23.008 [2024-10-14 16:48:27.327027] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.008 [2024-10-14 16:48:27.327032] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.008 [2024-10-14 16:48:27.327035] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.008 [2024-10-14 16:48:27.327038] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd7c760): datao=0, datal=8, cccid=4 00:23:23.008 [2024-10-14 16:48:27.327042] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xddca80) on tqpair(0xd7c760): expected_datao=0, payload_size=8 00:23:23.008 [2024-10-14 16:48:27.327046] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.008 [2024-10-14 16:48:27.327051] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.008 [2024-10-14 16:48:27.327057] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.008 [2024-10-14 16:48:27.367741] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.008 [2024-10-14 16:48:27.367750] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.008 [2024-10-14 16:48:27.367754] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.008 [2024-10-14 16:48:27.367757] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddca80) on tqpair=0xd7c760 00:23:23.008 ===================================================== 00:23:23.008 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:23.008 ===================================================== 00:23:23.008 Controller Capabilities/Features 00:23:23.008 ================================ 00:23:23.008 Vendor ID: 0000 00:23:23.008 Subsystem Vendor ID: 0000 00:23:23.008 Serial Number: .................... 00:23:23.008 Model Number: ........................................ 
00:23:23.008 Firmware Version: 25.01 00:23:23.008 Recommended Arb Burst: 0 00:23:23.008 IEEE OUI Identifier: 00 00 00 00:23:23.008 Multi-path I/O 00:23:23.008 May have multiple subsystem ports: No 00:23:23.008 May have multiple controllers: No 00:23:23.008 Associated with SR-IOV VF: No 00:23:23.008 Max Data Transfer Size: 131072 00:23:23.008 Max Number of Namespaces: 0 00:23:23.008 Max Number of I/O Queues: 1024 00:23:23.008 NVMe Specification Version (VS): 1.3 00:23:23.008 NVMe Specification Version (Identify): 1.3 00:23:23.008 Maximum Queue Entries: 128 00:23:23.008 Contiguous Queues Required: Yes 00:23:23.008 Arbitration Mechanisms Supported 00:23:23.008 Weighted Round Robin: Not Supported 00:23:23.008 Vendor Specific: Not Supported 00:23:23.008 Reset Timeout: 15000 ms 00:23:23.008 Doorbell Stride: 4 bytes 00:23:23.008 NVM Subsystem Reset: Not Supported 00:23:23.008 Command Sets Supported 00:23:23.008 NVM Command Set: Supported 00:23:23.008 Boot Partition: Not Supported 00:23:23.008 Memory Page Size Minimum: 4096 bytes 00:23:23.008 Memory Page Size Maximum: 4096 bytes 00:23:23.008 Persistent Memory Region: Not Supported 00:23:23.008 Optional Asynchronous Events Supported 00:23:23.008 Namespace Attribute Notices: Not Supported 00:23:23.008 Firmware Activation Notices: Not Supported 00:23:23.008 ANA Change Notices: Not Supported 00:23:23.008 PLE Aggregate Log Change Notices: Not Supported 00:23:23.008 LBA Status Info Alert Notices: Not Supported 00:23:23.008 EGE Aggregate Log Change Notices: Not Supported 00:23:23.008 Normal NVM Subsystem Shutdown event: Not Supported 00:23:23.008 Zone Descriptor Change Notices: Not Supported 00:23:23.008 Discovery Log Change Notices: Supported 00:23:23.008 Controller Attributes 00:23:23.008 128-bit Host Identifier: Not Supported 00:23:23.008 Non-Operational Permissive Mode: Not Supported 00:23:23.008 NVM Sets: Not Supported 00:23:23.008 Read Recovery Levels: Not Supported 00:23:23.008 Endurance Groups: Not Supported 00:23:23.008 Predictable Latency Mode: Not Supported 00:23:23.008 Traffic Based Keep ALive: Not Supported 00:23:23.008 Namespace Granularity: Not Supported 00:23:23.008 SQ Associations: Not Supported 00:23:23.008 UUID List: Not Supported 00:23:23.008 Multi-Domain Subsystem: Not Supported 00:23:23.008 Fixed Capacity Management: Not Supported 00:23:23.008 Variable Capacity Management: Not Supported 00:23:23.008 Delete Endurance Group: Not Supported 00:23:23.008 Delete NVM Set: Not Supported 00:23:23.008 Extended LBA Formats Supported: Not Supported 00:23:23.008 Flexible Data Placement Supported: Not Supported 00:23:23.008 00:23:23.008 Controller Memory Buffer Support 00:23:23.008 ================================ 00:23:23.008 Supported: No 00:23:23.008 00:23:23.008 Persistent Memory Region Support 00:23:23.008 ================================ 00:23:23.008 Supported: No 00:23:23.008 00:23:23.008 Admin Command Set Attributes 00:23:23.008 ============================ 00:23:23.008 Security Send/Receive: Not Supported 00:23:23.008 Format NVM: Not Supported 00:23:23.008 Firmware Activate/Download: Not Supported 00:23:23.008 Namespace Management: Not Supported 00:23:23.008 Device Self-Test: Not Supported 00:23:23.008 Directives: Not Supported 00:23:23.008 NVMe-MI: Not Supported 00:23:23.008 Virtualization Management: Not Supported 00:23:23.008 Doorbell Buffer Config: Not Supported 00:23:23.008 Get LBA Status Capability: Not Supported 00:23:23.008 Command & Feature Lockdown Capability: Not Supported 00:23:23.008 Abort Command Limit: 1 00:23:23.008 Async 
Event Request Limit: 4 00:23:23.008 Number of Firmware Slots: N/A 00:23:23.008 Firmware Slot 1 Read-Only: N/A 00:23:23.008 Firmware Activation Without Reset: N/A 00:23:23.008 Multiple Update Detection Support: N/A 00:23:23.008 Firmware Update Granularity: No Information Provided 00:23:23.008 Per-Namespace SMART Log: No 00:23:23.008 Asymmetric Namespace Access Log Page: Not Supported 00:23:23.008 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:23.008 Command Effects Log Page: Not Supported 00:23:23.008 Get Log Page Extended Data: Supported 00:23:23.008 Telemetry Log Pages: Not Supported 00:23:23.008 Persistent Event Log Pages: Not Supported 00:23:23.008 Supported Log Pages Log Page: May Support 00:23:23.008 Commands Supported & Effects Log Page: Not Supported 00:23:23.008 Feature Identifiers & Effects Log Page:May Support 00:23:23.008 NVMe-MI Commands & Effects Log Page: May Support 00:23:23.008 Data Area 4 for Telemetry Log: Not Supported 00:23:23.008 Error Log Page Entries Supported: 128 00:23:23.008 Keep Alive: Not Supported 00:23:23.008 00:23:23.008 NVM Command Set Attributes 00:23:23.008 ========================== 00:23:23.008 Submission Queue Entry Size 00:23:23.008 Max: 1 00:23:23.008 Min: 1 00:23:23.008 Completion Queue Entry Size 00:23:23.008 Max: 1 00:23:23.008 Min: 1 00:23:23.008 Number of Namespaces: 0 00:23:23.008 Compare Command: Not Supported 00:23:23.008 Write Uncorrectable Command: Not Supported 00:23:23.008 Dataset Management Command: Not Supported 00:23:23.008 Write Zeroes Command: Not Supported 00:23:23.008 Set Features Save Field: Not Supported 00:23:23.008 Reservations: Not Supported 00:23:23.008 Timestamp: Not Supported 00:23:23.008 Copy: Not Supported 00:23:23.008 Volatile Write Cache: Not Present 00:23:23.008 Atomic Write Unit (Normal): 1 00:23:23.008 Atomic Write Unit (PFail): 1 00:23:23.008 Atomic Compare & Write Unit: 1 00:23:23.008 Fused Compare & Write: Supported 00:23:23.008 Scatter-Gather List 00:23:23.008 SGL Command Set: Supported 00:23:23.008 SGL Keyed: Supported 00:23:23.008 SGL Bit Bucket Descriptor: Not Supported 00:23:23.008 SGL Metadata Pointer: Not Supported 00:23:23.008 Oversized SGL: Not Supported 00:23:23.008 SGL Metadata Address: Not Supported 00:23:23.008 SGL Offset: Supported 00:23:23.008 Transport SGL Data Block: Not Supported 00:23:23.008 Replay Protected Memory Block: Not Supported 00:23:23.008 00:23:23.008 Firmware Slot Information 00:23:23.008 ========================= 00:23:23.008 Active slot: 0 00:23:23.008 00:23:23.008 00:23:23.008 Error Log 00:23:23.008 ========= 00:23:23.008 00:23:23.008 Active Namespaces 00:23:23.008 ================= 00:23:23.008 Discovery Log Page 00:23:23.008 ================== 00:23:23.008 Generation Counter: 2 00:23:23.009 Number of Records: 2 00:23:23.009 Record Format: 0 00:23:23.009 00:23:23.009 Discovery Log Entry 0 00:23:23.009 ---------------------- 00:23:23.009 Transport Type: 3 (TCP) 00:23:23.009 Address Family: 1 (IPv4) 00:23:23.009 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:23.009 Entry Flags: 00:23:23.009 Duplicate Returned Information: 1 00:23:23.009 Explicit Persistent Connection Support for Discovery: 1 00:23:23.009 Transport Requirements: 00:23:23.009 Secure Channel: Not Required 00:23:23.009 Port ID: 0 (0x0000) 00:23:23.009 Controller ID: 65535 (0xffff) 00:23:23.009 Admin Max SQ Size: 128 00:23:23.009 Transport Service Identifier: 4420 00:23:23.009 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:23.009 Transport Address: 10.0.0.2 00:23:23.009 
Discovery Log Entry 1 00:23:23.009 ---------------------- 00:23:23.009 Transport Type: 3 (TCP) 00:23:23.009 Address Family: 1 (IPv4) 00:23:23.009 Subsystem Type: 2 (NVM Subsystem) 00:23:23.009 Entry Flags: 00:23:23.009 Duplicate Returned Information: 0 00:23:23.009 Explicit Persistent Connection Support for Discovery: 0 00:23:23.009 Transport Requirements: 00:23:23.009 Secure Channel: Not Required 00:23:23.009 Port ID: 0 (0x0000) 00:23:23.009 Controller ID: 65535 (0xffff) 00:23:23.009 Admin Max SQ Size: 128 00:23:23.009 Transport Service Identifier: 4420 00:23:23.009 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:23.009 Transport Address: 10.0.0.2 [2024-10-14 16:48:27.367831] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:23.009 [2024-10-14 16:48:27.367842] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc480) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.367848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.009 [2024-10-14 16:48:27.367853] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc600) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.367857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.009 [2024-10-14 16:48:27.367861] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc780) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.367865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.009 [2024-10-14 16:48:27.367869] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.367873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.009 [2024-10-14 16:48:27.367881] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.367884] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.367887] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.009 [2024-10-14 16:48:27.367894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.009 [2024-10-14 16:48:27.367907] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.009 [2024-10-14 16:48:27.367965] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.009 [2024-10-14 16:48:27.367971] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.009 [2024-10-14 16:48:27.367974] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.367977] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.367983] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.367987] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.367990] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.009 [2024-10-14 16:48:27.367996] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.009 [2024-10-14 16:48:27.368008] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.009 [2024-10-14 16:48:27.368079] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.009 [2024-10-14 16:48:27.368085] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.009 [2024-10-14 16:48:27.368088] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368091] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.368095] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:23.009 [2024-10-14 16:48:27.368099] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:23.009 [2024-10-14 16:48:27.368109] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368113] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368116] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.009 [2024-10-14 16:48:27.368121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.009 [2024-10-14 16:48:27.368131] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.009 [2024-10-14 16:48:27.368197] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.009 [2024-10-14 16:48:27.368203] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.009 [2024-10-14 16:48:27.368206] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368209] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.368217] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368221] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368224] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.009 [2024-10-14 16:48:27.368230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.009 [2024-10-14 16:48:27.368239] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.009 [2024-10-14 16:48:27.368297] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.009 [2024-10-14 16:48:27.368303] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.009 [2024-10-14 16:48:27.368306] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368309] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.368317] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368321] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368324] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.009 [2024-10-14 16:48:27.368329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.009 [2024-10-14 16:48:27.368338] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.009 [2024-10-14 16:48:27.368399] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.009 [2024-10-14 16:48:27.368404] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.009 [2024-10-14 16:48:27.368407] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368410] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.368419] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368422] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368425] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.009 [2024-10-14 16:48:27.368431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.009 [2024-10-14 16:48:27.368440] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.009 [2024-10-14 16:48:27.368499] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.009 [2024-10-14 16:48:27.368505] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.009 [2024-10-14 16:48:27.368508] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368511] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.368519] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368523] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368527] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.009 [2024-10-14 16:48:27.368533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.009 [2024-10-14 16:48:27.368542] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.009 [2024-10-14 16:48:27.368606] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.009 [2024-10-14 16:48:27.368612] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.009 [2024-10-14 16:48:27.368615] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368619] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.368627] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368630] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368633] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.009 [2024-10-14 16:48:27.368639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.009 [2024-10-14 16:48:27.368648] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.009 [2024-10-14 16:48:27.368719] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.009 [2024-10-14 16:48:27.368724] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.009 [2024-10-14 16:48:27.368727] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368730] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.009 [2024-10-14 16:48:27.368738] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368742] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.009 [2024-10-14 16:48:27.368745] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.009 [2024-10-14 16:48:27.368750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.009 [2024-10-14 16:48:27.368760] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.368834] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.368840] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.368842] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.368846] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.368854] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.368857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.368860] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.368866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.368875] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.368939] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.368944] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.368947] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.368950] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.368959] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.368963] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.368966] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.368973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.368982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.369040] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.369045] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.369048] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369052] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.369060] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369063] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.369072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.369081] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.369159] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.369164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.369167] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369170] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.369178] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369182] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369185] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.369190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.369199] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.369277] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.369282] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.369285] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369289] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.369297] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369300] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369303] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.369309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.369318] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.369396] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.369401] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.369404] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369408] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.369417] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369420] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369423] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.369429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.369440] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.369510] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.369516] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.369519] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369522] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.369530] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369534] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369536] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.369542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.369551] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.369609] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.369615] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.369618] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369621] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.369629] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369632] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369635] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.369641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.369650] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.369709] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.369714] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.369717] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369720] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 
[2024-10-14 16:48:27.369728] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369732] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369735] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.369740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.369749] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.369813] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.369819] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.369822] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369825] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.369833] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369837] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369840] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.369845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.369859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.369920] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.369925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.369928] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369931] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.369939] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369943] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.369946] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.369951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.369960] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.370038] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.370044] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.370047] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.370050] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.370057] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.370061] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.010 [2024-10-14 
16:48:27.370064] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.010 [2024-10-14 16:48:27.370069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.010 [2024-10-14 16:48:27.370078] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.010 [2024-10-14 16:48:27.370136] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.010 [2024-10-14 16:48:27.370142] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.010 [2024-10-14 16:48:27.370144] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.370148] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.010 [2024-10-14 16:48:27.370155] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.010 [2024-10-14 16:48:27.370159] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370162] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.011 [2024-10-14 16:48:27.370167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.011 [2024-10-14 16:48:27.370177] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.011 [2024-10-14 16:48:27.370237] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.011 [2024-10-14 16:48:27.370243] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.011 [2024-10-14 16:48:27.370245] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370249] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.011 [2024-10-14 16:48:27.370257] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370263] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.011 [2024-10-14 16:48:27.370268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.011 [2024-10-14 16:48:27.370277] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.011 [2024-10-14 16:48:27.370337] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.011 [2024-10-14 16:48:27.370343] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.011 [2024-10-14 16:48:27.370346] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370349] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.011 [2024-10-14 16:48:27.370357] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370360] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370363] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.011 [2024-10-14 16:48:27.370369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.011 [2024-10-14 16:48:27.370378] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.011 [2024-10-14 16:48:27.370438] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.011 [2024-10-14 16:48:27.370444] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.011 [2024-10-14 16:48:27.370447] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370450] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.011 [2024-10-14 16:48:27.370458] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370461] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370465] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.011 [2024-10-14 16:48:27.370470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.011 [2024-10-14 16:48:27.370479] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.011 [2024-10-14 16:48:27.370555] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.011 [2024-10-14 16:48:27.370561] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.011 [2024-10-14 16:48:27.370564] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370567] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.011 [2024-10-14 16:48:27.370575] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370578] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.370581] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.011 [2024-10-14 16:48:27.370587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.011 [2024-10-14 16:48:27.370595] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.011 [2024-10-14 16:48:27.374607] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.011 [2024-10-14 16:48:27.374614] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.011 [2024-10-14 16:48:27.374617] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.374621] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.011 [2024-10-14 16:48:27.374630] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.374634] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.374637] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd7c760) 00:23:23.011 [2024-10-14 16:48:27.374642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.011 [2024-10-14 16:48:27.374653] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xddc900, cid 3, qid 0 00:23:23.011 [2024-10-14 
16:48:27.374805] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.011 [2024-10-14 16:48:27.374810] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.011 [2024-10-14 16:48:27.374815] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.374818] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xddc900) on tqpair=0xd7c760 00:23:23.011 [2024-10-14 16:48:27.374825] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:23.011 00:23:23.011 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:23.011 [2024-10-14 16:48:27.413744] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:23:23.011 [2024-10-14 16:48:27.413791] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621422 ] 00:23:23.011 [2024-10-14 16:48:27.437599] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:23.011 [2024-10-14 16:48:27.441646] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:23.011 [2024-10-14 16:48:27.441651] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:23.011 [2024-10-14 16:48:27.441663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:23.011 [2024-10-14 16:48:27.441669] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:23.011 [2024-10-14 16:48:27.442082] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:23.011 [2024-10-14 16:48:27.442110] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa4c760 0 00:23:23.011 [2024-10-14 16:48:27.456611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:23.011 [2024-10-14 16:48:27.456624] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:23.011 [2024-10-14 16:48:27.456628] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:23.011 [2024-10-14 16:48:27.456631] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:23.011 [2024-10-14 16:48:27.456654] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.456659] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.456663] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4c760) 00:23:23.011 [2024-10-14 16:48:27.456672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:23.011 [2024-10-14 16:48:27.456689] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac480, cid 0, qid 0 00:23:23.011 [2024-10-14 16:48:27.464611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.011 [2024-10-14 16:48:27.464619] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:23.011 [2024-10-14 16:48:27.464622] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.464626] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac480) on tqpair=0xa4c760 00:23:23.011 [2024-10-14 16:48:27.464636] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:23.011 [2024-10-14 16:48:27.464643] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:23.011 [2024-10-14 16:48:27.464647] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:23.011 [2024-10-14 16:48:27.464657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.464664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.464667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4c760) 00:23:23.011 [2024-10-14 16:48:27.464673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.011 [2024-10-14 16:48:27.464686] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac480, cid 0, qid 0 00:23:23.011 [2024-10-14 16:48:27.464855] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.011 [2024-10-14 16:48:27.464861] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.011 [2024-10-14 16:48:27.464865] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.464868] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac480) on tqpair=0xa4c760 00:23:23.011 [2024-10-14 16:48:27.464872] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:23.011 [2024-10-14 16:48:27.464878] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:23.011 [2024-10-14 16:48:27.464884] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.464888] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.464891] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4c760) 00:23:23.011 [2024-10-14 16:48:27.464897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.011 [2024-10-14 16:48:27.464907] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac480, cid 0, qid 0 00:23:23.011 [2024-10-14 16:48:27.464972] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.011 [2024-10-14 16:48:27.464978] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.011 [2024-10-14 16:48:27.464981] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.464984] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac480) on tqpair=0xa4c760 00:23:23.011 [2024-10-14 16:48:27.464989] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:23.011 [2024-10-14 16:48:27.464996] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to check en wait for cc (timeout 15000 ms) 00:23:23.011 [2024-10-14 16:48:27.465001] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.465005] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.011 [2024-10-14 16:48:27.465008] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4c760) 00:23:23.011 [2024-10-14 16:48:27.465013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.012 [2024-10-14 16:48:27.465023] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac480, cid 0, qid 0 00:23:23.012 [2024-10-14 16:48:27.465082] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.012 [2024-10-14 16:48:27.465088] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.012 [2024-10-14 16:48:27.465091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465094] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac480) on tqpair=0xa4c760 00:23:23.012 [2024-10-14 16:48:27.465099] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:23.012 [2024-10-14 16:48:27.465107] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465110] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465113] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4c760) 00:23:23.012 [2024-10-14 16:48:27.465119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.012 [2024-10-14 16:48:27.465130] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac480, cid 0, qid 0 00:23:23.012 [2024-10-14 16:48:27.465193] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.012 [2024-10-14 16:48:27.465198] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.012 [2024-10-14 16:48:27.465201] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465204] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac480) on tqpair=0xa4c760 00:23:23.012 [2024-10-14 16:48:27.465209] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:23.012 [2024-10-14 16:48:27.465213] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:23.012 [2024-10-14 16:48:27.465219] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:23.012 [2024-10-14 16:48:27.465324] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:23.012 [2024-10-14 16:48:27.465327] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:23.012 [2024-10-14 16:48:27.465333] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465337] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465340] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4c760) 00:23:23.012 [2024-10-14 16:48:27.465345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.012 [2024-10-14 16:48:27.465355] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac480, cid 0, qid 0 00:23:23.012 [2024-10-14 16:48:27.465428] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.012 [2024-10-14 16:48:27.465433] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.012 [2024-10-14 16:48:27.465436] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465440] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac480) on tqpair=0xa4c760 00:23:23.012 [2024-10-14 16:48:27.465444] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:23.012 [2024-10-14 16:48:27.465451] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465455] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465458] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4c760) 00:23:23.012 [2024-10-14 16:48:27.465464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.012 [2024-10-14 16:48:27.465473] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac480, cid 0, qid 0 00:23:23.012 [2024-10-14 16:48:27.465536] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.012 [2024-10-14 16:48:27.465542] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.012 [2024-10-14 16:48:27.465545] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465548] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac480) on tqpair=0xa4c760 00:23:23.012 [2024-10-14 16:48:27.465552] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:23.012 [2024-10-14 16:48:27.465556] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:23.012 [2024-10-14 16:48:27.465562] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:23.012 [2024-10-14 16:48:27.465573] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:23.012 [2024-10-14 16:48:27.465582] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465585] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4c760) 00:23:23.012 [2024-10-14 16:48:27.465591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.012 [2024-10-14 16:48:27.465605] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac480, cid 0, qid 0 00:23:23.012 [2024-10-14 16:48:27.465692] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.012 [2024-10-14 16:48:27.465698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.012 [2024-10-14 16:48:27.465701] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465705] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4c760): datao=0, datal=4096, cccid=0 00:23:23.012 [2024-10-14 16:48:27.465709] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaac480) on tqpair(0xa4c760): expected_datao=0, payload_size=4096 00:23:23.012 [2024-10-14 16:48:27.465713] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465725] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.465729] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506727] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.012 [2024-10-14 16:48:27.506740] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.012 [2024-10-14 16:48:27.506744] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506747] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac480) on tqpair=0xa4c760 00:23:23.012 [2024-10-14 16:48:27.506756] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:23.012 [2024-10-14 16:48:27.506760] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:23.012 [2024-10-14 16:48:27.506764] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:23.012 [2024-10-14 16:48:27.506768] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:23.012 [2024-10-14 16:48:27.506772] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:23.012 [2024-10-14 16:48:27.506776] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:23.012 [2024-10-14 16:48:27.506784] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:23.012 [2024-10-14 16:48:27.506794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506798] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506801] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4c760) 00:23:23.012 [2024-10-14 16:48:27.506808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:23.012 [2024-10-14 16:48:27.506820] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac480, cid 0, qid 0 00:23:23.012 [2024-10-14 16:48:27.506882] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.012 [2024-10-14 16:48:27.506888] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.012 [2024-10-14 16:48:27.506891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506894] nvme_tcp.c:1079:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xaac480) on tqpair=0xa4c760 00:23:23.012 [2024-10-14 16:48:27.506900] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506903] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506909] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa4c760) 00:23:23.012 [2024-10-14 16:48:27.506914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.012 [2024-10-14 16:48:27.506919] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506923] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506926] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa4c760) 00:23:23.012 [2024-10-14 16:48:27.506931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.012 [2024-10-14 16:48:27.506936] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506939] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506942] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa4c760) 00:23:23.012 [2024-10-14 16:48:27.506947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.012 [2024-10-14 16:48:27.506952] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506955] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506958] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.012 [2024-10-14 16:48:27.506963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.012 [2024-10-14 16:48:27.506968] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:23.012 [2024-10-14 16:48:27.506978] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:23.012 [2024-10-14 16:48:27.506984] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.012 [2024-10-14 16:48:27.506987] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4c760) 00:23:23.012 [2024-10-14 16:48:27.506992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.012 [2024-10-14 16:48:27.507004] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac480, cid 0, qid 0 00:23:23.012 [2024-10-14 16:48:27.507009] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac600, cid 1, qid 0 00:23:23.012 [2024-10-14 16:48:27.507013] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac780, cid 2, qid 0 00:23:23.012 [2024-10-14 16:48:27.507017] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.012 [2024-10-14 16:48:27.507021] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaca80, cid 4, qid 0 00:23:23.013 [2024-10-14 16:48:27.507116] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.013 [2024-10-14 16:48:27.507122] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.013 [2024-10-14 16:48:27.507126] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.507129] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaaca80) on tqpair=0xa4c760 00:23:23.013 [2024-10-14 16:48:27.507133] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:23.013 [2024-10-14 16:48:27.507138] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:23.013 [2024-10-14 16:48:27.507147] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:23.013 [2024-10-14 16:48:27.507153] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:23.013 [2024-10-14 16:48:27.507160] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.507164] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.507167] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4c760) 00:23:23.013 [2024-10-14 16:48:27.507172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:23.013 [2024-10-14 16:48:27.507183] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaca80, cid 4, qid 0 00:23:23.013 [2024-10-14 16:48:27.507245] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.013 [2024-10-14 16:48:27.507252] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.013 [2024-10-14 16:48:27.507255] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.507258] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaaca80) on tqpair=0xa4c760 00:23:23.013 [2024-10-14 16:48:27.507308] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:23.013 [2024-10-14 16:48:27.507317] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:23.013 [2024-10-14 16:48:27.507324] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.507328] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4c760) 00:23:23.013 [2024-10-14 16:48:27.507333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.013 [2024-10-14 16:48:27.507343] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaca80, cid 4, qid 0 00:23:23.013 [2024-10-14 16:48:27.507416] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.013 [2024-10-14 16:48:27.507423] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:23:23.013 [2024-10-14 16:48:27.507426] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.507430] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4c760): datao=0, datal=4096, cccid=4 00:23:23.013 [2024-10-14 16:48:27.507434] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaaca80) on tqpair(0xa4c760): expected_datao=0, payload_size=4096 00:23:23.013 [2024-10-14 16:48:27.507438] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.507448] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.507452] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.551607] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.013 [2024-10-14 16:48:27.551618] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.013 [2024-10-14 16:48:27.551621] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.551625] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaaca80) on tqpair=0xa4c760 00:23:23.013 [2024-10-14 16:48:27.551640] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:23.013 [2024-10-14 16:48:27.551648] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:23.013 [2024-10-14 16:48:27.551658] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:23.013 [2024-10-14 16:48:27.551664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.551667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4c760) 00:23:23.013 [2024-10-14 16:48:27.551674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.013 [2024-10-14 16:48:27.551686] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaca80, cid 4, qid 0 00:23:23.013 [2024-10-14 16:48:27.551850] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.013 [2024-10-14 16:48:27.551856] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.013 [2024-10-14 16:48:27.551859] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.551862] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4c760): datao=0, datal=4096, cccid=4 00:23:23.013 [2024-10-14 16:48:27.551866] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaaca80) on tqpair(0xa4c760): expected_datao=0, payload_size=4096 00:23:23.013 [2024-10-14 16:48:27.551870] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.551883] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.551887] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.596608] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.013 [2024-10-14 16:48:27.596618] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.013 [2024-10-14 16:48:27.596622] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.596625] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaaca80) on tqpair=0xa4c760 00:23:23.013 [2024-10-14 16:48:27.596637] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:23.013 [2024-10-14 16:48:27.596647] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:23.013 [2024-10-14 16:48:27.596654] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.596658] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4c760) 00:23:23.013 [2024-10-14 16:48:27.596665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.013 [2024-10-14 16:48:27.596677] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaca80, cid 4, qid 0 00:23:23.013 [2024-10-14 16:48:27.596832] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.013 [2024-10-14 16:48:27.596838] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.013 [2024-10-14 16:48:27.596841] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.596844] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4c760): datao=0, datal=4096, cccid=4 00:23:23.013 [2024-10-14 16:48:27.596848] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaaca80) on tqpair(0xa4c760): expected_datao=0, payload_size=4096 00:23:23.013 [2024-10-14 16:48:27.596852] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.596865] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.013 [2024-10-14 16:48:27.596869] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.637743] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.309 [2024-10-14 16:48:27.637756] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.309 [2024-10-14 16:48:27.637759] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.637763] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaaca80) on tqpair=0xa4c760 00:23:23.309 [2024-10-14 16:48:27.637777] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:23.309 [2024-10-14 16:48:27.637785] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:23.309 [2024-10-14 16:48:27.637792] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:23.309 [2024-10-14 16:48:27.637798] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:23.309 [2024-10-14 16:48:27.637805] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:23.309 [2024-10-14 16:48:27.637810] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:23.309 [2024-10-14 16:48:27.637815] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:23.309 [2024-10-14 16:48:27.637819] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:23.309 [2024-10-14 16:48:27.637823] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:23.309 [2024-10-14 16:48:27.637836] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.637840] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4c760) 00:23:23.309 [2024-10-14 16:48:27.637847] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.309 [2024-10-14 16:48:27.637853] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.637856] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.637859] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa4c760) 00:23:23.309 [2024-10-14 16:48:27.637865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:23.309 [2024-10-14 16:48:27.637877] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaca80, cid 4, qid 0 00:23:23.309 [2024-10-14 16:48:27.637881] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaacc00, cid 5, qid 0 00:23:23.309 [2024-10-14 16:48:27.637978] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.309 [2024-10-14 16:48:27.637983] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.309 [2024-10-14 16:48:27.637986] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.637990] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaaca80) on tqpair=0xa4c760 00:23:23.309 [2024-10-14 16:48:27.637995] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.309 [2024-10-14 16:48:27.638000] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.309 [2024-10-14 16:48:27.638003] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.638006] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaacc00) on tqpair=0xa4c760 00:23:23.309 [2024-10-14 16:48:27.638015] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.638019] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa4c760) 00:23:23.309 [2024-10-14 16:48:27.638024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.309 [2024-10-14 16:48:27.638034] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaacc00, cid 5, qid 0 00:23:23.309 [2024-10-14 16:48:27.638101] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.309 [2024-10-14 16:48:27.638107] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.309 [2024-10-14 16:48:27.638109] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.638113] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaacc00) on tqpair=0xa4c760 00:23:23.309 [2024-10-14 16:48:27.638121] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.638125] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa4c760) 00:23:23.309 [2024-10-14 16:48:27.638130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.309 [2024-10-14 16:48:27.638141] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaacc00, cid 5, qid 0 00:23:23.309 [2024-10-14 16:48:27.638200] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.309 [2024-10-14 16:48:27.638206] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.309 [2024-10-14 16:48:27.638209] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.638212] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaacc00) on tqpair=0xa4c760 00:23:23.309 [2024-10-14 16:48:27.638220] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.309 [2024-10-14 16:48:27.638224] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa4c760) 00:23:23.310 [2024-10-14 16:48:27.638229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.310 [2024-10-14 16:48:27.638238] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaacc00, cid 5, qid 0 00:23:23.310 [2024-10-14 16:48:27.638300] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.310 [2024-10-14 16:48:27.638306] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.310 [2024-10-14 16:48:27.638309] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638312] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaacc00) on tqpair=0xa4c760 00:23:23.310 [2024-10-14 16:48:27.638326] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638330] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa4c760) 00:23:23.310 [2024-10-14 16:48:27.638335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.310 [2024-10-14 16:48:27.638341] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638344] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa4c760) 00:23:23.310 [2024-10-14 16:48:27.638350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.310 [2024-10-14 16:48:27.638356] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638359] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa4c760) 00:23:23.310 [2024-10-14 16:48:27.638364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.310 [2024-10-14 16:48:27.638371] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638374] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa4c760) 00:23:23.310 [2024-10-14 16:48:27.638379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.310 [2024-10-14 16:48:27.638389] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaacc00, cid 5, qid 0 00:23:23.310 [2024-10-14 16:48:27.638394] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaaca80, cid 4, qid 0 00:23:23.310 [2024-10-14 16:48:27.638398] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaacd80, cid 6, qid 0 00:23:23.310 [2024-10-14 16:48:27.638402] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaacf00, cid 7, qid 0 00:23:23.310 [2024-10-14 16:48:27.638542] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.310 [2024-10-14 16:48:27.638548] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.310 [2024-10-14 16:48:27.638551] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638554] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4c760): datao=0, datal=8192, cccid=5 00:23:23.310 [2024-10-14 16:48:27.638558] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaacc00) on tqpair(0xa4c760): expected_datao=0, payload_size=8192 00:23:23.310 [2024-10-14 16:48:27.638564] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638591] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638595] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638607] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.310 [2024-10-14 16:48:27.638612] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.310 [2024-10-14 16:48:27.638615] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638618] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4c760): datao=0, datal=512, cccid=4 00:23:23.310 [2024-10-14 16:48:27.638622] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaaca80) on tqpair(0xa4c760): expected_datao=0, payload_size=512 00:23:23.310 [2024-10-14 16:48:27.638626] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638631] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638634] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.310 [2024-10-14 16:48:27.638647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.310 [2024-10-14 16:48:27.638649] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638652] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4c760): datao=0, datal=512, cccid=6 00:23:23.310 [2024-10-14 16:48:27.638656] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaacd80) on tqpair(0xa4c760): expected_datao=0, payload_size=512 00:23:23.310 [2024-10-14 16:48:27.638660] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638666] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638669] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638674] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:23.310 [2024-10-14 16:48:27.638678] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:23.310 [2024-10-14 16:48:27.638682] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638685] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa4c760): datao=0, datal=4096, cccid=7 00:23:23.310 [2024-10-14 16:48:27.638689] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xaacf00) on tqpair(0xa4c760): expected_datao=0, payload_size=4096 00:23:23.310 [2024-10-14 16:48:27.638692] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638698] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638701] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638708] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.310 [2024-10-14 16:48:27.638713] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.310 [2024-10-14 16:48:27.638716] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638719] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaacc00) on tqpair=0xa4c760 00:23:23.310 [2024-10-14 16:48:27.638729] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.310 [2024-10-14 16:48:27.638734] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.310 [2024-10-14 16:48:27.638737] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638741] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaaca80) on tqpair=0xa4c760 00:23:23.310 [2024-10-14 16:48:27.638749] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.310 [2024-10-14 16:48:27.638754] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.310 [2024-10-14 16:48:27.638757] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638760] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaacd80) on tqpair=0xa4c760 00:23:23.310 [2024-10-14 16:48:27.638767] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.310 [2024-10-14 16:48:27.638772] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.310 [2024-10-14 16:48:27.638775] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.310 [2024-10-14 16:48:27.638778] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaacf00) on tqpair=0xa4c760 00:23:23.310 ===================================================== 00:23:23.310 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:23.310 ===================================================== 00:23:23.310 Controller Capabilities/Features 00:23:23.310 ================================ 
00:23:23.310 Vendor ID: 8086 00:23:23.310 Subsystem Vendor ID: 8086 00:23:23.310 Serial Number: SPDK00000000000001 00:23:23.310 Model Number: SPDK bdev Controller 00:23:23.310 Firmware Version: 25.01 00:23:23.310 Recommended Arb Burst: 6 00:23:23.310 IEEE OUI Identifier: e4 d2 5c 00:23:23.310 Multi-path I/O 00:23:23.310 May have multiple subsystem ports: Yes 00:23:23.310 May have multiple controllers: Yes 00:23:23.310 Associated with SR-IOV VF: No 00:23:23.310 Max Data Transfer Size: 131072 00:23:23.310 Max Number of Namespaces: 32 00:23:23.310 Max Number of I/O Queues: 127 00:23:23.310 NVMe Specification Version (VS): 1.3 00:23:23.310 NVMe Specification Version (Identify): 1.3 00:23:23.310 Maximum Queue Entries: 128 00:23:23.310 Contiguous Queues Required: Yes 00:23:23.310 Arbitration Mechanisms Supported 00:23:23.310 Weighted Round Robin: Not Supported 00:23:23.310 Vendor Specific: Not Supported 00:23:23.310 Reset Timeout: 15000 ms 00:23:23.310 Doorbell Stride: 4 bytes 00:23:23.310 NVM Subsystem Reset: Not Supported 00:23:23.310 Command Sets Supported 00:23:23.310 NVM Command Set: Supported 00:23:23.310 Boot Partition: Not Supported 00:23:23.310 Memory Page Size Minimum: 4096 bytes 00:23:23.310 Memory Page Size Maximum: 4096 bytes 00:23:23.310 Persistent Memory Region: Not Supported 00:23:23.310 Optional Asynchronous Events Supported 00:23:23.310 Namespace Attribute Notices: Supported 00:23:23.310 Firmware Activation Notices: Not Supported 00:23:23.310 ANA Change Notices: Not Supported 00:23:23.310 PLE Aggregate Log Change Notices: Not Supported 00:23:23.310 LBA Status Info Alert Notices: Not Supported 00:23:23.310 EGE Aggregate Log Change Notices: Not Supported 00:23:23.310 Normal NVM Subsystem Shutdown event: Not Supported 00:23:23.310 Zone Descriptor Change Notices: Not Supported 00:23:23.310 Discovery Log Change Notices: Not Supported 00:23:23.310 Controller Attributes 00:23:23.310 128-bit Host Identifier: Supported 00:23:23.310 Non-Operational Permissive Mode: Not Supported 00:23:23.310 NVM Sets: Not Supported 00:23:23.310 Read Recovery Levels: Not Supported 00:23:23.310 Endurance Groups: Not Supported 00:23:23.310 Predictable Latency Mode: Not Supported 00:23:23.310 Traffic Based Keep ALive: Not Supported 00:23:23.310 Namespace Granularity: Not Supported 00:23:23.310 SQ Associations: Not Supported 00:23:23.310 UUID List: Not Supported 00:23:23.310 Multi-Domain Subsystem: Not Supported 00:23:23.310 Fixed Capacity Management: Not Supported 00:23:23.310 Variable Capacity Management: Not Supported 00:23:23.310 Delete Endurance Group: Not Supported 00:23:23.310 Delete NVM Set: Not Supported 00:23:23.310 Extended LBA Formats Supported: Not Supported 00:23:23.310 Flexible Data Placement Supported: Not Supported 00:23:23.310 00:23:23.310 Controller Memory Buffer Support 00:23:23.310 ================================ 00:23:23.310 Supported: No 00:23:23.310 00:23:23.310 Persistent Memory Region Support 00:23:23.310 ================================ 00:23:23.310 Supported: No 00:23:23.310 00:23:23.311 Admin Command Set Attributes 00:23:23.311 ============================ 00:23:23.311 Security Send/Receive: Not Supported 00:23:23.311 Format NVM: Not Supported 00:23:23.311 Firmware Activate/Download: Not Supported 00:23:23.311 Namespace Management: Not Supported 00:23:23.311 Device Self-Test: Not Supported 00:23:23.311 Directives: Not Supported 00:23:23.311 NVMe-MI: Not Supported 00:23:23.311 Virtualization Management: Not Supported 00:23:23.311 Doorbell Buffer Config: Not Supported 00:23:23.311 
Get LBA Status Capability: Not Supported 00:23:23.311 Command & Feature Lockdown Capability: Not Supported 00:23:23.311 Abort Command Limit: 4 00:23:23.311 Async Event Request Limit: 4 00:23:23.311 Number of Firmware Slots: N/A 00:23:23.311 Firmware Slot 1 Read-Only: N/A 00:23:23.311 Firmware Activation Without Reset: N/A 00:23:23.311 Multiple Update Detection Support: N/A 00:23:23.311 Firmware Update Granularity: No Information Provided 00:23:23.311 Per-Namespace SMART Log: No 00:23:23.311 Asymmetric Namespace Access Log Page: Not Supported 00:23:23.311 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:23.311 Command Effects Log Page: Supported 00:23:23.311 Get Log Page Extended Data: Supported 00:23:23.311 Telemetry Log Pages: Not Supported 00:23:23.311 Persistent Event Log Pages: Not Supported 00:23:23.311 Supported Log Pages Log Page: May Support 00:23:23.311 Commands Supported & Effects Log Page: Not Supported 00:23:23.311 Feature Identifiers & Effects Log Page:May Support 00:23:23.311 NVMe-MI Commands & Effects Log Page: May Support 00:23:23.311 Data Area 4 for Telemetry Log: Not Supported 00:23:23.311 Error Log Page Entries Supported: 128 00:23:23.311 Keep Alive: Supported 00:23:23.311 Keep Alive Granularity: 10000 ms 00:23:23.311 00:23:23.311 NVM Command Set Attributes 00:23:23.311 ========================== 00:23:23.311 Submission Queue Entry Size 00:23:23.311 Max: 64 00:23:23.311 Min: 64 00:23:23.311 Completion Queue Entry Size 00:23:23.311 Max: 16 00:23:23.311 Min: 16 00:23:23.311 Number of Namespaces: 32 00:23:23.311 Compare Command: Supported 00:23:23.311 Write Uncorrectable Command: Not Supported 00:23:23.311 Dataset Management Command: Supported 00:23:23.311 Write Zeroes Command: Supported 00:23:23.311 Set Features Save Field: Not Supported 00:23:23.311 Reservations: Supported 00:23:23.311 Timestamp: Not Supported 00:23:23.311 Copy: Supported 00:23:23.311 Volatile Write Cache: Present 00:23:23.311 Atomic Write Unit (Normal): 1 00:23:23.311 Atomic Write Unit (PFail): 1 00:23:23.311 Atomic Compare & Write Unit: 1 00:23:23.311 Fused Compare & Write: Supported 00:23:23.311 Scatter-Gather List 00:23:23.311 SGL Command Set: Supported 00:23:23.311 SGL Keyed: Supported 00:23:23.311 SGL Bit Bucket Descriptor: Not Supported 00:23:23.311 SGL Metadata Pointer: Not Supported 00:23:23.311 Oversized SGL: Not Supported 00:23:23.311 SGL Metadata Address: Not Supported 00:23:23.311 SGL Offset: Supported 00:23:23.311 Transport SGL Data Block: Not Supported 00:23:23.311 Replay Protected Memory Block: Not Supported 00:23:23.311 00:23:23.311 Firmware Slot Information 00:23:23.311 ========================= 00:23:23.311 Active slot: 1 00:23:23.311 Slot 1 Firmware Revision: 25.01 00:23:23.311 00:23:23.311 00:23:23.311 Commands Supported and Effects 00:23:23.311 ============================== 00:23:23.311 Admin Commands 00:23:23.311 -------------- 00:23:23.311 Get Log Page (02h): Supported 00:23:23.311 Identify (06h): Supported 00:23:23.311 Abort (08h): Supported 00:23:23.311 Set Features (09h): Supported 00:23:23.311 Get Features (0Ah): Supported 00:23:23.311 Asynchronous Event Request (0Ch): Supported 00:23:23.311 Keep Alive (18h): Supported 00:23:23.311 I/O Commands 00:23:23.311 ------------ 00:23:23.311 Flush (00h): Supported LBA-Change 00:23:23.311 Write (01h): Supported LBA-Change 00:23:23.311 Read (02h): Supported 00:23:23.311 Compare (05h): Supported 00:23:23.311 Write Zeroes (08h): Supported LBA-Change 00:23:23.311 Dataset Management (09h): Supported LBA-Change 00:23:23.311 Copy (19h): 
Supported LBA-Change 00:23:23.311 00:23:23.311 Error Log 00:23:23.311 ========= 00:23:23.311 00:23:23.311 Arbitration 00:23:23.311 =========== 00:23:23.311 Arbitration Burst: 1 00:23:23.311 00:23:23.311 Power Management 00:23:23.311 ================ 00:23:23.311 Number of Power States: 1 00:23:23.311 Current Power State: Power State #0 00:23:23.311 Power State #0: 00:23:23.311 Max Power: 0.00 W 00:23:23.311 Non-Operational State: Operational 00:23:23.311 Entry Latency: Not Reported 00:23:23.311 Exit Latency: Not Reported 00:23:23.311 Relative Read Throughput: 0 00:23:23.311 Relative Read Latency: 0 00:23:23.311 Relative Write Throughput: 0 00:23:23.311 Relative Write Latency: 0 00:23:23.311 Idle Power: Not Reported 00:23:23.311 Active Power: Not Reported 00:23:23.311 Non-Operational Permissive Mode: Not Supported 00:23:23.311 00:23:23.311 Health Information 00:23:23.311 ================== 00:23:23.311 Critical Warnings: 00:23:23.311 Available Spare Space: OK 00:23:23.311 Temperature: OK 00:23:23.311 Device Reliability: OK 00:23:23.311 Read Only: No 00:23:23.311 Volatile Memory Backup: OK 00:23:23.311 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:23.311 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:23.311 Available Spare: 0% 00:23:23.311 Available Spare Threshold: 0% 00:23:23.311 Life Percentage Used:[2024-10-14 16:48:27.638861] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.311 [2024-10-14 16:48:27.638866] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa4c760) 00:23:23.311 [2024-10-14 16:48:27.638872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.311 [2024-10-14 16:48:27.638883] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaacf00, cid 7, qid 0 00:23:23.311 [2024-10-14 16:48:27.638967] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.311 [2024-10-14 16:48:27.638973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.311 [2024-10-14 16:48:27.638976] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.311 [2024-10-14 16:48:27.638979] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaacf00) on tqpair=0xa4c760 00:23:23.311 [2024-10-14 16:48:27.639007] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:23.311 [2024-10-14 16:48:27.639017] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac480) on tqpair=0xa4c760 00:23:23.311 [2024-10-14 16:48:27.639022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.311 [2024-10-14 16:48:27.639027] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac600) on tqpair=0xa4c760 00:23:23.311 [2024-10-14 16:48:27.639031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.311 [2024-10-14 16:48:27.639035] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac780) on tqpair=0xa4c760 00:23:23.311 [2024-10-14 16:48:27.639039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.311 [2024-10-14 16:48:27.639043] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 
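For context, the controller data printed above can be reproduced by hand against the same target. Below is a minimal, illustrative sketch using standard nvme-cli commands; the test itself drives SPDK's own identify example binary rather than nvme-cli, and the /dev/nvme1 device name is an assumption (the actual node depends on the host):

  # Illustrative only: query the SPDK subsystem at 10.0.0.2:4420 with nvme-cli.
  # /dev/nvme1 is an assumed device name, not taken from this log.
  nvme discover -t tcp -a 10.0.0.2 -s 4420                      # list subsystems the target exposes
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme1                                       # controller data (model, MDTS, command sets, ...)
  nvme id-ns /dev/nvme1 -n 1                                    # namespace data (LBA format, capacity)
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                 # triggers the orderly shutdown traced in the surrounding debug output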
00:23:23.311 [2024-10-14 16:48:27.639047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:23.311 [2024-10-14 16:48:27.639054] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.311 [2024-10-14 16:48:27.639057] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.311 [2024-10-14 16:48:27.639060] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.311 [2024-10-14 16:48:27.639066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.311 [2024-10-14 16:48:27.639077] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.311 [2024-10-14 16:48:27.639138] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.311 [2024-10-14 16:48:27.639144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.311 [2024-10-14 16:48:27.639147] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.311 [2024-10-14 16:48:27.639151] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.311 [2024-10-14 16:48:27.639156] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.311 [2024-10-14 16:48:27.639159] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.311 [2024-10-14 16:48:27.639162] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.311 [2024-10-14 16:48:27.639168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.311 [2024-10-14 16:48:27.639181] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.311 [2024-10-14 16:48:27.639254] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.311 [2024-10-14 16:48:27.639260] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.311 [2024-10-14 16:48:27.639263] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.311 [2024-10-14 16:48:27.639266] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.311 [2024-10-14 16:48:27.639270] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:23.311 [2024-10-14 16:48:27.639274] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:23.311 [2024-10-14 16:48:27.639282] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.311 [2024-10-14 16:48:27.639285] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.311 [2024-10-14 16:48:27.639288] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.311 [2024-10-14 16:48:27.639294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.311 [2024-10-14 16:48:27.639303] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.311 [2024-10-14 16:48:27.639373] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.311 [2024-10-14 16:48:27.639379] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:23:23.311 [2024-10-14 16:48:27.639382] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639386] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.639394] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639398] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639401] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.639406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.639415] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.639489] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.639495] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.639498] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.639509] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639513] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639516] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.639521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.639530] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.639614] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.639620] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.639623] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639627] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.639635] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639639] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639641] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.639649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.639659] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.639721] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.639727] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.639730] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639733] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.639741] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639745] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639748] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.639753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.639763] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.639841] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.639847] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.639850] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639853] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.639861] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639864] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639867] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.639872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.639884] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.639958] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.639964] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.639967] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639970] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.639978] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639981] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.639984] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.639990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.639999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.640074] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.640080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.640083] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640086] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.640094] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640098] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640101] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.640107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.640118] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.640180] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.640186] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.640189] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640192] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.640201] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640205] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640208] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.640213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.640223] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.640288] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.640294] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.640297] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640301] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.640309] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640312] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640315] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.640321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.640330] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.640412] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.640418] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.640421] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640425] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.640434] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640437] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640440] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 
16:48:27.640445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.640455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.640523] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.640531] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.640534] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640537] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.640545] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640549] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.640552] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.640557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.640567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.644613] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.644625] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.644629] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.644632] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.644643] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.644647] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.644650] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa4c760) 00:23:23.312 [2024-10-14 16:48:27.644657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.312 [2024-10-14 16:48:27.644669] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xaac900, cid 3, qid 0 00:23:23.312 [2024-10-14 16:48:27.644730] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:23.312 [2024-10-14 16:48:27.644736] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:23.312 [2024-10-14 16:48:27.644739] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:23.312 [2024-10-14 16:48:27.644742] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xaac900) on tqpair=0xa4c760 00:23:23.312 [2024-10-14 16:48:27.644749] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:23.312 0% 00:23:23.312 Data Units Read: 0 00:23:23.312 Data Units Written: 0 00:23:23.312 Host Read Commands: 0 00:23:23.312 Host Write Commands: 0 00:23:23.313 Controller Busy Time: 0 minutes 00:23:23.313 Power Cycles: 0 00:23:23.313 Power On Hours: 0 hours 00:23:23.313 Unsafe Shutdowns: 0 00:23:23.313 Unrecoverable Media Errors: 0 00:23:23.313 Lifetime Error Log Entries: 0 00:23:23.313 Warning Temperature Time: 0 
minutes 00:23:23.313 Critical Temperature Time: 0 minutes 00:23:23.313 00:23:23.313 Number of Queues 00:23:23.313 ================ 00:23:23.313 Number of I/O Submission Queues: 127 00:23:23.313 Number of I/O Completion Queues: 127 00:23:23.313 00:23:23.313 Active Namespaces 00:23:23.313 ================= 00:23:23.313 Namespace ID:1 00:23:23.313 Error Recovery Timeout: Unlimited 00:23:23.313 Command Set Identifier: NVM (00h) 00:23:23.313 Deallocate: Supported 00:23:23.313 Deallocated/Unwritten Error: Not Supported 00:23:23.313 Deallocated Read Value: Unknown 00:23:23.313 Deallocate in Write Zeroes: Not Supported 00:23:23.313 Deallocated Guard Field: 0xFFFF 00:23:23.313 Flush: Supported 00:23:23.313 Reservation: Supported 00:23:23.313 Namespace Sharing Capabilities: Multiple Controllers 00:23:23.313 Size (in LBAs): 131072 (0GiB) 00:23:23.313 Capacity (in LBAs): 131072 (0GiB) 00:23:23.313 Utilization (in LBAs): 131072 (0GiB) 00:23:23.313 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:23.313 EUI64: ABCDEF0123456789 00:23:23.313 UUID: 041b0935-1fdd-4f1b-b95a-aa40f345966c 00:23:23.313 Thin Provisioning: Not Supported 00:23:23.313 Per-NS Atomic Units: Yes 00:23:23.313 Atomic Boundary Size (Normal): 0 00:23:23.313 Atomic Boundary Size (PFail): 0 00:23:23.313 Atomic Boundary Offset: 0 00:23:23.313 Maximum Single Source Range Length: 65535 00:23:23.313 Maximum Copy Length: 65535 00:23:23.313 Maximum Source Range Count: 1 00:23:23.313 NGUID/EUI64 Never Reused: No 00:23:23.313 Namespace Write Protected: No 00:23:23.313 Number of LBA Formats: 1 00:23:23.313 Current LBA Format: LBA Format #00 00:23:23.313 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:23.313 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.313 rmmod nvme_tcp 00:23:23.313 rmmod nvme_fabrics 00:23:23.313 rmmod nvme_keyring 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 621391 ']' 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@516 -- # killprocess 621391 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 621391 ']' 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 621391 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 621391 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 621391' 00:23:23.313 killing process with pid 621391 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 621391 00:23:23.313 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 621391 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.607 16:48:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.512 16:48:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:25.512 00:23:25.512 real 0m9.455s 00:23:25.512 user 0m5.883s 00:23:25.512 sys 0m4.867s 00:23:25.512 16:48:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:25.512 16:48:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:25.512 ************************************ 00:23:25.512 END TEST nvmf_identify 00:23:25.512 ************************************ 00:23:25.512 16:48:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:25.512 16:48:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:25.512 16:48:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:25.512 16:48:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.512 ************************************ 
00:23:25.512 START TEST nvmf_perf 00:23:25.512 ************************************ 00:23:25.512 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:25.772 * Looking for test storage... 00:23:25.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:25.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.772 --rc genhtml_branch_coverage=1 00:23:25.772 --rc genhtml_function_coverage=1 00:23:25.772 --rc genhtml_legend=1 00:23:25.772 --rc geninfo_all_blocks=1 00:23:25.772 --rc geninfo_unexecuted_blocks=1 00:23:25.772 00:23:25.772 ' 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:25.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.772 --rc genhtml_branch_coverage=1 00:23:25.772 --rc genhtml_function_coverage=1 00:23:25.772 --rc genhtml_legend=1 00:23:25.772 --rc geninfo_all_blocks=1 00:23:25.772 --rc geninfo_unexecuted_blocks=1 00:23:25.772 00:23:25.772 ' 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:25.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.772 --rc genhtml_branch_coverage=1 00:23:25.772 --rc genhtml_function_coverage=1 00:23:25.772 --rc genhtml_legend=1 00:23:25.772 --rc geninfo_all_blocks=1 00:23:25.772 --rc geninfo_unexecuted_blocks=1 00:23:25.772 00:23:25.772 ' 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:25.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.772 --rc genhtml_branch_coverage=1 00:23:25.772 --rc genhtml_function_coverage=1 00:23:25.772 --rc genhtml_legend=1 00:23:25.772 --rc geninfo_all_blocks=1 00:23:25.772 --rc geninfo_unexecuted_blocks=1 00:23:25.772 00:23:25.772 ' 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:25.772 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.773 16:48:30 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:25.773 16:48:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.341 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:32.342 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:32.342 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:32.342 Found net devices under 0000:86:00.0: cvl_0_0 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:32.342 16:48:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:32.342 Found net devices under 0000:86:00.1: cvl_0_1 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.342 16:48:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:23:32.342 00:23:32.342 --- 10.0.0.2 ping statistics --- 00:23:32.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.342 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:23:32.342 00:23:32.342 --- 10.0.0.1 ping statistics --- 00:23:32.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.342 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=624976 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 624976 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 624976 ']' 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:32.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.342 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:32.342 [2024-10-14 16:48:36.349095] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:23:32.342 [2024-10-14 16:48:36.349137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.342 [2024-10-14 16:48:36.421689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.342 [2024-10-14 16:48:36.464088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.342 [2024-10-14 16:48:36.464125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.342 [2024-10-14 16:48:36.464133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.342 [2024-10-14 16:48:36.464139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.342 [2024-10-14 16:48:36.464144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.342 [2024-10-14 16:48:36.465657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.342 [2024-10-14 16:48:36.465771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.343 [2024-10-14 16:48:36.465878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.343 [2024-10-14 16:48:36.465879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.343 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.343 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:32.343 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:32.343 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.343 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:32.343 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.343 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:32.343 16:48:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:35.625 16:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:35.625 16:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:35.625 16:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:35.625 16:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:35.625 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
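The trace up to this point shows the common target-side setup: one e810 port (cvl_0_0) is moved into a dedicated network namespace as the target interface with 10.0.0.2/24 while the initiator-side port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened in iptables, reachability is verified with ping, and nvmf_tgt is started inside the namespace. The perf script then pipes the generated local-NVMe config into the target, reads the controller's PCI address back out of the bdev config, and creates a small Malloc bdev (64 MB, 512-byte blocks). A condensed, hand-run sketch of the same steps follows; interface and namespace names are the ones from this run, paths are shortened, and the commands are assumed to be run as root from the spdk repository root, so treat it as illustrative rather than the exact harness code:

  # Sketch: target/initiator split across a network namespace, as in the trace above
  ip netns add cvl_0_0_ns_spdk                       # namespace for the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target NIC port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target reachability check

  # Start the target inside the namespace, attach the local NVMe controller, create a Malloc bdev
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/gen_nvme.sh | ./scripts/rpc.py load_subsystem_config
  traddr=$(./scripts/rpc.py framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr')
  ./scripts/rpc.py bdev_malloc_create 64 512

The RPCs that follow in the trace (nvmf_create_transport -t tcp -o, nvmf_create_subsystem, nvmf_subsystem_add_ns for Malloc0 and Nvme0n1, nvmf_subsystem_add_listener) then export both bdevs over NVMe/TCP on 10.0.0.2:4420 before the spdk_nvme_perf runs start.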
00:23:35.625 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:35.626 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:35.626 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:35.626 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:35.626 [2024-10-14 16:48:40.230636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.883 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:35.883 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:35.884 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.142 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:36.142 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:36.400 16:48:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.658 [2024-10-14 16:48:41.057763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.658 16:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:36.658 16:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:36.658 16:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:36.658 16:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:36.658 16:48:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:38.032 Initializing NVMe Controllers 00:23:38.032 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:38.032 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:38.032 Initialization complete. Launching workers. 
00:23:38.032 ======================================================== 00:23:38.032 Latency(us) 00:23:38.032 Device Information : IOPS MiB/s Average min max 00:23:38.032 PCIE (0000:5e:00.0) NSID 1 from core 0: 98307.51 384.01 324.97 22.92 5905.33 00:23:38.032 ======================================================== 00:23:38.032 Total : 98307.51 384.01 324.97 22.92 5905.33 00:23:38.032 00:23:38.032 16:48:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:39.407 Initializing NVMe Controllers 00:23:39.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:39.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:39.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:39.407 Initialization complete. Launching workers. 00:23:39.407 ======================================================== 00:23:39.407 Latency(us) 00:23:39.407 Device Information : IOPS MiB/s Average min max 00:23:39.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 114.00 0.45 9084.86 106.88 45734.01 00:23:39.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21820.85 7219.00 47885.05 00:23:39.407 ======================================================== 00:23:39.407 Total : 160.00 0.62 12746.46 106.88 47885.05 00:23:39.407 00:23:39.407 16:48:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:40.781 Initializing NVMe Controllers 00:23:40.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:40.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:40.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:40.781 Initialization complete. Launching workers. 00:23:40.781 ======================================================== 00:23:40.781 Latency(us) 00:23:40.781 Device Information : IOPS MiB/s Average min max 00:23:40.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11190.00 43.71 2860.23 426.99 9196.34 00:23:40.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3781.00 14.77 8524.93 7149.34 16567.71 00:23:40.781 ======================================================== 00:23:40.781 Total : 14971.00 58.48 4290.87 426.99 16567.71 00:23:40.781 00:23:40.781 16:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:40.781 16:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:40.781 16:48:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:43.312 Initializing NVMe Controllers 00:23:43.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.312 Controller IO queue size 128, less than required. 00:23:43.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:43.312 Controller IO queue size 128, less than required. 00:23:43.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:43.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:43.312 Initialization complete. Launching workers. 00:23:43.312 ======================================================== 00:23:43.312 Latency(us) 00:23:43.312 Device Information : IOPS MiB/s Average min max 00:23:43.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1807.84 451.96 71854.60 53289.73 121999.41 00:23:43.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.78 150.94 219626.31 63500.62 367312.53 00:23:43.312 ======================================================== 00:23:43.312 Total : 2411.62 602.90 108851.08 53289.73 367312.53 00:23:43.312 00:23:43.312 16:48:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:43.569 No valid NVMe controllers or AIO or URING devices found 00:23:43.569 Initializing NVMe Controllers 00:23:43.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.569 Controller IO queue size 128, less than required. 00:23:43.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:43.569 Controller IO queue size 128, less than required. 00:23:43.569 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:43.569 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:43.569 WARNING: Some requested NVMe devices were skipped 00:23:43.569 16:48:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:46.100 Initializing NVMe Controllers 00:23:46.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.100 Controller IO queue size 128, less than required. 00:23:46.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:46.100 Controller IO queue size 128, less than required. 00:23:46.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:46.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:46.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:46.100 Initialization complete. Launching workers. 
00:23:46.100 00:23:46.100 ==================== 00:23:46.100 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:46.100 TCP transport: 00:23:46.100 polls: 17026 00:23:46.100 idle_polls: 13151 00:23:46.100 sock_completions: 3875 00:23:46.100 nvme_completions: 6283 00:23:46.100 submitted_requests: 9380 00:23:46.100 queued_requests: 1 00:23:46.100 00:23:46.100 ==================== 00:23:46.100 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:46.100 TCP transport: 00:23:46.100 polls: 17747 00:23:46.100 idle_polls: 13554 00:23:46.100 sock_completions: 4193 00:23:46.100 nvme_completions: 6393 00:23:46.100 submitted_requests: 9526 00:23:46.100 queued_requests: 1 00:23:46.100 ======================================================== 00:23:46.100 Latency(us) 00:23:46.100 Device Information : IOPS MiB/s Average min max 00:23:46.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1570.00 392.50 83546.25 53526.54 128528.98 00:23:46.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1597.49 399.37 80924.53 41354.47 130286.98 00:23:46.100 ======================================================== 00:23:46.100 Total : 3167.50 791.87 82224.01 41354.47 130286.98 00:23:46.100 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:46.100 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:46.100 rmmod nvme_tcp 00:23:46.100 rmmod nvme_fabrics 00:23:46.359 rmmod nvme_keyring 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 624976 ']' 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 624976 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 624976 ']' 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 624976 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 624976 00:23:46.359 16:48:50 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 624976' 00:23:46.359 killing process with pid 624976 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 624976 00:23:46.359 16:48:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 624976 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.889 16:48:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.794 16:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:50.794 00:23:50.794 real 0m24.865s 00:23:50.794 user 1m5.491s 00:23:50.794 sys 0m8.260s 00:23:50.794 16:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:50.794 16:48:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:50.794 ************************************ 00:23:50.794 END TEST nvmf_perf 00:23:50.794 ************************************ 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.794 ************************************ 00:23:50.794 START TEST nvmf_fio_host 00:23:50.794 ************************************ 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:50.794 * Looking for test storage... 
00:23:50.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.794 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.795 --rc genhtml_branch_coverage=1 00:23:50.795 --rc genhtml_function_coverage=1 00:23:50.795 --rc genhtml_legend=1 00:23:50.795 --rc geninfo_all_blocks=1 00:23:50.795 --rc geninfo_unexecuted_blocks=1 00:23:50.795 00:23:50.795 ' 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.795 --rc genhtml_branch_coverage=1 00:23:50.795 --rc genhtml_function_coverage=1 00:23:50.795 --rc genhtml_legend=1 00:23:50.795 --rc geninfo_all_blocks=1 00:23:50.795 --rc geninfo_unexecuted_blocks=1 00:23:50.795 00:23:50.795 ' 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.795 --rc genhtml_branch_coverage=1 00:23:50.795 --rc genhtml_function_coverage=1 00:23:50.795 --rc genhtml_legend=1 00:23:50.795 --rc geninfo_all_blocks=1 00:23:50.795 --rc geninfo_unexecuted_blocks=1 00:23:50.795 00:23:50.795 ' 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:50.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.795 --rc genhtml_branch_coverage=1 00:23:50.795 --rc genhtml_function_coverage=1 00:23:50.795 --rc genhtml_legend=1 00:23:50.795 --rc geninfo_all_blocks=1 00:23:50.795 --rc geninfo_unexecuted_blocks=1 00:23:50.795 00:23:50.795 ' 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.795 16:48:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:50.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.795 
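As the trace above shows, test/nvmf/common.sh pins the test topology in environment variables: the listener ports (4420/4421/4422), the serial number, and a host NQN generated with nvme gen-hostnqn that is handed to initiators as --hostnqn/--hostid. The tests in this log drive I/O through SPDK's userspace initiators (spdk_nvme_perf earlier, the fio plugin below), but the same listener can also be checked by hand with the kernel nvme-tcp initiator. The sketch below is such a manual check, not something this harness runs; the subsystem NQN and addresses are the ones used in this log:

  # Sketch: exercising the target's 10.0.0.2:4420 listener with the kernel initiator
  modprobe nvme-tcp
  hostnqn=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn"
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$hostnqn"
  nvme list                              # exported namespaces appear as /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1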
16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:50.795 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.796 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.796 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.796 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:50.796 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:50.796 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:50.796 16:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.360 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:57.361 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:57.361 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:57.361 Found net devices under 0000:86:00.0: cvl_0_0 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:57.361 Found net devices under 0000:86:00.1: cvl_0_1 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.361 16:49:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:57.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:23:57.361 00:23:57.361 --- 10.0.0.2 ping statistics --- 00:23:57.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.361 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:23:57.361 00:23:57.361 --- 10.0.0.1 ping statistics --- 00:23:57.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.361 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=631296 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 631296 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 631296 ']' 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.361 [2024-10-14 16:49:01.266098] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:23:57.361 [2024-10-14 16:49:01.266140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.361 [2024-10-14 16:49:01.334266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.361 [2024-10-14 16:49:01.376440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.361 [2024-10-14 16:49:01.376477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.361 [2024-10-14 16:49:01.376484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.361 [2024-10-14 16:49:01.376490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.361 [2024-10-14 16:49:01.376495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.361 [2024-10-14 16:49:01.378046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.361 [2024-10-14 16:49:01.378154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.361 [2024-10-14 16:49:01.378262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.361 [2024-10-14 16:49:01.378262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.361 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.362 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:23:57.362 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:57.362 [2024-10-14 16:49:01.642621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.362 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:57.362 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:57.362 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.362 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:57.362 Malloc1 00:23:57.362 16:49:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.619 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:57.877 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.134 [2024-10-14 16:49:02.519651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.134 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.402 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:58.402 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:58.402 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.402 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.402 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.402 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:58.402 16:49:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:58.658 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:58.658 fio-3.35 00:23:58.658 Starting 1 thread 00:24:01.172 00:24:01.172 test: (groupid=0, jobs=1): 
err= 0: pid=631675: Mon Oct 14 16:49:05 2024 00:24:01.172 read: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(86.4MiB/2005msec) 00:24:01.172 slat (nsec): min=1540, max=240584, avg=1742.92, stdev=2279.60 00:24:01.172 clat (usec): min=3157, max=10024, avg=6411.94, stdev=656.31 00:24:01.172 lat (usec): min=3190, max=10026, avg=6413.68, stdev=656.21 00:24:01.172 clat percentiles (usec): 00:24:01.172 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5604], 20.00th=[ 5866], 00:24:01.172 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6521], 00:24:01.172 | 70.00th=[ 6718], 80.00th=[ 6980], 90.00th=[ 7308], 95.00th=[ 7570], 00:24:01.172 | 99.00th=[ 8094], 99.50th=[ 8225], 99.90th=[ 8717], 99.95th=[ 9110], 00:24:01.172 | 99.99th=[ 9896] 00:24:01.172 bw ( KiB/s): min=41072, max=47440, per=99.94%, avg=44082.00, stdev=2994.75, samples=4 00:24:01.173 iops : min=10268, max=11860, avg=11020.50, stdev=748.69, samples=4 00:24:01.173 write: IOPS=11.0k, BW=42.9MiB/s (45.0MB/s)(86.1MiB/2005msec); 0 zone resets 00:24:01.173 slat (nsec): min=1570, max=227345, avg=1809.49, stdev=1707.59 00:24:01.173 clat (usec): min=2422, max=9153, avg=5177.79, stdev=538.16 00:24:01.173 lat (usec): min=2438, max=9155, avg=5179.60, stdev=538.11 00:24:01.173 clat percentiles (usec): 00:24:01.173 | 1.00th=[ 4113], 5.00th=[ 4359], 10.00th=[ 4555], 20.00th=[ 4686], 00:24:01.173 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5276], 00:24:01.173 | 70.00th=[ 5473], 80.00th=[ 5604], 90.00th=[ 5866], 95.00th=[ 6128], 00:24:01.173 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7242], 99.95th=[ 8455], 00:24:01.173 | 99.99th=[ 9110] 00:24:01.173 bw ( KiB/s): min=41608, max=47392, per=99.99%, avg=43972.00, stdev=2835.63, samples=4 00:24:01.173 iops : min=10402, max=11848, avg=10993.00, stdev=708.91, samples=4 00:24:01.173 lat (msec) : 4=0.31%, 10=99.69%, 20=0.01% 00:24:01.173 cpu : usr=73.40%, sys=25.70%, ctx=79, majf=0, minf=2 00:24:01.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:01.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:01.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:01.173 issued rwts: total=22110,22044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:01.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:01.173 00:24:01.173 Run status group 0 (all jobs): 00:24:01.173 READ: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=86.4MiB (90.6MB), run=2005-2005msec 00:24:01.173 WRITE: bw=42.9MiB/s (45.0MB/s), 42.9MiB/s-42.9MiB/s (45.0MB/s-45.0MB/s), io=86.1MiB (90.3MB), run=2005-2005msec 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- 
# local sanitizers 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:01.173 16:49:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:01.173 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:01.173 fio-3.35 00:24:01.173 Starting 1 thread 00:24:03.692 00:24:03.692 test: (groupid=0, jobs=1): err= 0: pid=632240: Mon Oct 14 16:49:08 2024 00:24:03.692 read: IOPS=10.8k, BW=168MiB/s (176MB/s)(338MiB/2007msec) 00:24:03.692 slat (usec): min=2, max=105, avg= 2.91, stdev= 1.53 00:24:03.692 clat (usec): min=1812, max=13259, avg=6830.64, stdev=1558.92 00:24:03.692 lat (usec): min=1815, max=13262, avg=6833.55, stdev=1559.01 00:24:03.692 clat percentiles (usec): 00:24:03.692 | 1.00th=[ 3687], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5473], 00:24:03.692 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7242], 00:24:03.692 | 70.00th=[ 7570], 80.00th=[ 8094], 90.00th=[ 8717], 95.00th=[ 9503], 00:24:03.692 | 99.00th=[10945], 99.50th=[11338], 99.90th=[12649], 99.95th=[12911], 00:24:03.692 | 99.99th=[13173] 00:24:03.692 bw ( KiB/s): min=82432, max=93892, per=51.01%, avg=87913.00, stdev=5858.40, samples=4 00:24:03.692 iops : min= 5152, max= 5868, avg=5494.50, stdev=366.07, samples=4 00:24:03.692 write: IOPS=6393, BW=99.9MiB/s (105MB/s)(180MiB/1802msec); 0 zone resets 00:24:03.692 slat 
(usec): min=29, max=256, avg=32.23, stdev= 6.61 00:24:03.692 clat (usec): min=3857, max=16274, avg=8548.72, stdev=1423.56 00:24:03.692 lat (usec): min=3886, max=16308, avg=8580.95, stdev=1424.42 00:24:03.692 clat percentiles (usec): 00:24:03.692 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7373], 00:24:03.692 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8356], 60.00th=[ 8717], 00:24:03.692 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11076], 00:24:03.692 | 99.00th=[12256], 99.50th=[12911], 99.90th=[15270], 99.95th=[15664], 00:24:03.692 | 99.99th=[16188] 00:24:03.692 bw ( KiB/s): min=86272, max=98107, per=89.79%, avg=91854.75, stdev=5531.40, samples=4 00:24:03.692 iops : min= 5392, max= 6131, avg=5740.75, stdev=345.45, samples=4 00:24:03.692 lat (msec) : 2=0.02%, 4=1.48%, 10=90.78%, 20=7.73% 00:24:03.692 cpu : usr=83.50%, sys=14.06%, ctx=115, majf=0, minf=2 00:24:03.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:03.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.692 issued rwts: total=21618,11521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.692 00:24:03.692 Run status group 0 (all jobs): 00:24:03.692 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=338MiB (354MB), run=2007-2007msec 00:24:03.692 WRITE: bw=99.9MiB/s (105MB/s), 99.9MiB/s-99.9MiB/s (105MB/s-105MB/s), io=180MiB (189MB), run=1802-1802msec 00:24:03.692 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.950 rmmod nvme_tcp 00:24:03.950 rmmod nvme_fabrics 00:24:03.950 rmmod nvme_keyring 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 631296 ']' 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 631296 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 631296 ']' 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
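Note: both fio runs above go through the SPDK NVMe fio plugin rather than a kernel block device: fio_plugin LD_PRELOADs build/fio/spdk_nvme (plus the matching ASAN runtime when the build is sanitized, which is what the ldd | grep libasan probing is for) and encodes the NVMe-oF connection in the job's filename. A standalone sketch under the same assumptions as this run (fio installed at /usr/src/fio, job file already carries ioengine=spdk, commands run from the SPDK source tree):

  LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio \
      app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096
  # the whole connection string (transport, address family, address, service id,
  # namespace) travels in --filename; fio itself never opens a block device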
common/autotest_common.sh@954 -- # kill -0 631296 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 631296 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 631296' 00:24:03.950 killing process with pid 631296 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 631296 00:24:03.950 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 631296 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.208 16:49:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.741 00:24:06.741 real 0m15.684s 00:24:06.741 user 0m46.144s 00:24:06.741 sys 0m6.483s 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.741 ************************************ 00:24:06.741 END TEST nvmf_fio_host 00:24:06.741 ************************************ 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.741 ************************************ 00:24:06.741 START TEST nvmf_failover 00:24:06.741 ************************************ 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:06.741 * Looking for test storage... 00:24:06.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:06.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.741 --rc genhtml_branch_coverage=1 00:24:06.741 --rc genhtml_function_coverage=1 00:24:06.741 --rc genhtml_legend=1 00:24:06.741 --rc geninfo_all_blocks=1 00:24:06.741 --rc geninfo_unexecuted_blocks=1 00:24:06.741 00:24:06.741 ' 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:06.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.741 --rc genhtml_branch_coverage=1 00:24:06.741 --rc genhtml_function_coverage=1 00:24:06.741 --rc genhtml_legend=1 00:24:06.741 --rc geninfo_all_blocks=1 00:24:06.741 --rc geninfo_unexecuted_blocks=1 00:24:06.741 00:24:06.741 ' 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:06.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.741 --rc genhtml_branch_coverage=1 00:24:06.741 --rc genhtml_function_coverage=1 00:24:06.741 --rc genhtml_legend=1 00:24:06.741 --rc geninfo_all_blocks=1 00:24:06.741 --rc geninfo_unexecuted_blocks=1 00:24:06.741 00:24:06.741 ' 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:06.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.741 --rc genhtml_branch_coverage=1 00:24:06.741 --rc genhtml_function_coverage=1 00:24:06.741 --rc genhtml_legend=1 00:24:06.741 --rc geninfo_all_blocks=1 00:24:06.741 --rc geninfo_unexecuted_blocks=1 00:24:06.741 00:24:06.741 ' 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.741 16:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
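Note: the lt 1.15 2 / cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.x before choosing coverage options. A condensed sketch of the comparison it performs (ver_lt is a hypothetical helper name; version components are assumed to be dot- or dash-separated integers):

  ver_lt() {                              # succeeds when $1 < $2
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1                            # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo 'lcov predates 2.x'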
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.741 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
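Note: alongside the target-side settings, nvmf/common.sh above also prepares initiator identity for the nvme-cli parts of the suite (NVME_HOSTNQN from nvme gen-hostnqn, NVME_HOSTID, NVME_CONNECT='nvme connect'). The connect itself does not appear in this excerpt; a hedged sketch of how those pieces typically combine against the listener created later in this test, assuming the kernel nvme-tcp initiator and standard nvme-cli flags:

  modprobe nvme-tcp
  hostnqn=$(nvme gen-hostnqn)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$hostnqn"
  nvme list                                   # namespace shows up as /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1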
00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.742 16:49:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:13.304 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:13.305 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:13.305 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:13.305 Found net devices under 0000:86:00.0: cvl_0_0 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:13.305 Found net devices under 0000:86:00.1: cvl_0_1 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:13.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:24:13.305 00:24:13.305 --- 10.0.0.2 ping statistics --- 00:24:13.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.305 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:13.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:24:13.305 00:24:13.305 --- 10.0.0.1 ping statistics --- 00:24:13.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.305 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=636227 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 636227 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 636227 ']' 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.305 16:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:13.305 [2024-10-14 16:49:17.025052] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:24:13.305 [2024-10-14 16:49:17.025098] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.306 [2024-10-14 16:49:17.095839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:13.306 [2024-10-14 16:49:17.135022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
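Note: nvmf_tcp_init above builds the two-endpoint topology the rest of the suite relies on: the target-side NIC is moved into its own network namespace and the initiator reaches it over 10.0.0.1/10.0.0.2. A condensed sketch of the commands visible in the trace (interface names and addresses as in this run; root privileges assumed):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt started next is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why its TCP listeners bind inside the namespace.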
00:24:13.306 [2024-10-14 16:49:17.135060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.306 [2024-10-14 16:49:17.135068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.306 [2024-10-14 16:49:17.135075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.306 [2024-10-14 16:49:17.135080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.306 [2024-10-14 16:49:17.136477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.306 [2024-10-14 16:49:17.136582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.306 [2024-10-14 16:49:17.136582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.306 16:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.306 16:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:13.306 16:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:13.306 16:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:13.306 16:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:13.306 16:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.306 16:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:13.306 [2024-10-14 16:49:17.444728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.306 16:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:13.306 Malloc0 00:24:13.306 16:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.306 16:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.563 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.819 [2024-10-14 16:49:18.280743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.819 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:14.075 [2024-10-14 16:49:18.477277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:14.075 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:14.075 [2024-10-14 16:49:18.673903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:14.075 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:14.075 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=636490 00:24:14.075 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.075 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 636490 /var/tmp/bdevperf.sock 00:24:14.075 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 636490 ']' 00:24:14.075 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.075 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.076 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.076 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.076 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:14.334 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.334 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:14.334 16:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:14.898 NVMe0n1 00:24:14.898 16:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:15.155 00:24:15.155 16:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.155 16:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=636716 00:24:15.155 16:49:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:16.086 16:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.344 [2024-10-14 16:49:20.763147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1def390 is same with the state(6) to be set 00:24:16.344 [2024-10-14 16:49:20.763222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1def390 is same with the state(6) to be set 00:24:16.344 [2024-10-14 16:49:20.763230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1def390 is same with the state(6) to be set 00:24:16.344 [2024-10-14 
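Note: the failover test itself runs bdevperf as the initiator. The controller is attached with -x failover and a second path on 4421 is added, so removing the active listener on the target (the nvmf_subsystem_remove_listener call above, which produces the burst of qpair recv-state messages) forces I/O onto the alternate path while perform_tests keeps the verify workload running. A sketch of the sequence, with sockets and flags as in the trace:

  # initiator: bdevperf in RPC-server mode (-z), 128-deep 4 KiB verify workload
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # add 10.0.0.2:4421 as a second path to the same controller
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # start the workload, then drop the active listener on the target side
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420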
16:49:20.763237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1def390 is same with the state(6) to be set
[... same tcp.c:1773 message repeated for tqpair=0x1def390 from 16:49:20.763244 through 16:49:20.763440 ...]
00:24:16.344 16:49:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:19.625 16:49:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:19.625 
00:24:19.625 16:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:19.625 [2024-10-14 16:49:24.256619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df0190 is same with the state(6) to be set
[... same tcp.c:1773 message repeated for tqpair=0x1df0190 from 16:49:24.256658 through 16:49:24.256948 ...]
00:24:19.884 16:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:23.169 16:49:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:23.169 [2024-10-14 16:49:27.468688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:23.169 16:49:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:24.105 16:49:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:24.105 [2024-10-14 16:49:28.684201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df0ef0 is same with the state(6) to be set
[... same tcp.c:1773 message repeated for tqpair=0x1df0ef0 from 16:49:28.684239 through 16:49:28.684556 ...]
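The commands traced above are the core of the failover test: bdevperf attaches the controller over the 4422 listener with failover enabled, and the test then removes and adds subsystem listeners to force path switches while I/O keeps running. A minimal sketch of that RPC sequence, with the socket path, NQN, addresses and ports taken from this log and the nvmf target plus the bdevperf session assumed to be already running as they are in this job:

# Sketch only -- values copied from the trace above, surrounding setup assumed.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Attach the remote controller through bdevperf's RPC socket with the failover multipath policy.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
# Drop the listener the host is currently using so outstanding I/O fails over to the other path,
# then rotate the listeners once more (4420 up, 4422 down) while bdevperf keeps issuing I/O.
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
sleep 3
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422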
00:24:24.106 16:49:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 636716
00:24:30.769 {
00:24:30.769   "results": [
00:24:30.769     {
00:24:30.769       "job": "NVMe0n1",
00:24:30.769       "core_mask": "0x1",
00:24:30.769       "workload": "verify",
00:24:30.769       "status": "finished",
00:24:30.769       "verify_range": {
00:24:30.769         "start": 0,
00:24:30.769         "length": 16384
00:24:30.769       },
00:24:30.769       "queue_depth": 128,
00:24:30.769       "io_size": 4096,
00:24:30.769       "runtime": 15.001897,
00:24:30.769       "iops": 11171.720483082907,
00:24:30.769       "mibps": 43.639533137042605,
00:24:30.769       "io_failed": 10269,
00:24:30.769       "io_timeout": 0,
00:24:30.769       "avg_latency_us": 10774.737715722858,
00:24:30.769       "min_latency_us": 417.40190476190475,
00:24:30.769       "max_latency_us": 21470.841904761906
00:24:30.769     }
00:24:30.769   ],
00:24:30.769   "core_count": 1
00:24:30.769 }
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 636490
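The results block above is bdevperf's own summary for the 15-second run. As a quick plausibility check (plain arithmetic, nothing SPDK-specific), the reported throughput and completed I/O count can be rederived from the "iops", "io_size" and "runtime" fields; the numbers below are copied from the JSON, the rest is only a sketch:

awk 'BEGIN {
  iops    = 11171.720483082907;  # "iops" from the results above
  io_size = 4096;                # "io_size" in bytes
  runtime = 15.001897;           # "runtime" in seconds
  printf "%.2f MiB/s\n", iops * io_size / 1048576;  # ~43.64, matches "mibps"
  printf "%.0f I/Os completed\n", iops * runtime;   # ~167600 completed, alongside the 10269 "io_failed" during the path switches
}'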
16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 636490 ']'
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 636490
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 636490
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 636490'
00:24:30.769 killing process with pid 636490
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 636490
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 636490
00:24:30.769 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:30.769 [2024-10-14 16:49:18.748883] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization...
00:24:30.769 [2024-10-14 16:49:18.748937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid636490 ]
00:24:30.769 [2024-10-14 16:49:18.817379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:30.769 [2024-10-14 16:49:18.861429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:30.769 Running I/O for 15 seconds...
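The try.txt dump that follows is bdevperf's own log captured from this run: the EAL parameters line shows it ran single-core (-c 0x1) under the spdk_pid636490 file prefix, and the per-command NOTICE lines further down are the outstanding READ/WRITE requests being completed with ABORTED - SQ DELETION each time a path is torn down. Combined with the queue_depth, io_size, workload and runtime fields in the results JSON, the invocation behind this log is roughly the following; this is a hypothetical reconstruction, since the exact command line used by failover.sh is not part of this excerpt:

# Hypothetical reconstruction -- flag meanings per the bdevperf example app
# (-q queue depth, -o I/O size in bytes, -w workload, -t runtime in seconds,
#  -z/-r wait for RPC configuration on the socket the test attaches through).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15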
00:24:30.769 11239.00 IOPS, 43.90 MiB/s [2024-10-14T14:49:35.403Z] [2024-10-14 16:49:20.764374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.769 [2024-10-14 16:49:20.764696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:30.769 [2024-10-14 16:49:20.764711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-10-14 16:49:20.764717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 
16:49:20.764861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.770 [2024-10-14 16:49:20.764987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.764995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765152] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.770 [2024-10-14 16:49:20.765284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.770 [2024-10-14 16:49:20.765292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 
16:49:20.765585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.771 [2024-10-14 16:49:20.765684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.771 [2024-10-14 16:49:20.765723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101728 len:8 PRP1 0x0 PRP2 0x0 00:24:30.771 [2024-10-14 16:49:20.765730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.771 [2024-10-14 16:49:20.765744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.771 [2024-10-14 16:49:20.765750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101736 len:8 PRP1 0x0 PRP2 0x0 00:24:30.771 [2024-10-14 16:49:20.765756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.771 [2024-10-14 16:49:20.765768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:24:30.771 [2024-10-14 16:49:20.765773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101744 len:8 PRP1 0x0 PRP2 0x0 00:24:30.771 [2024-10-14 16:49:20.765780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.771 [2024-10-14 16:49:20.765791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.771 [2024-10-14 16:49:20.765797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101752 len:8 PRP1 0x0 PRP2 0x0 00:24:30.771 [2024-10-14 16:49:20.765803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.771 [2024-10-14 16:49:20.765815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.771 [2024-10-14 16:49:20.765820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101760 len:8 PRP1 0x0 PRP2 0x0 00:24:30.771 [2024-10-14 16:49:20.765827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.771 [2024-10-14 16:49:20.765838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.771 [2024-10-14 16:49:20.765844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101768 len:8 PRP1 0x0 PRP2 0x0 00:24:30.771 [2024-10-14 16:49:20.765850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.771 [2024-10-14 16:49:20.765862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.771 [2024-10-14 16:49:20.765867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101776 len:8 PRP1 0x0 PRP2 0x0 00:24:30.771 [2024-10-14 16:49:20.765873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.771 [2024-10-14 16:49:20.765884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.771 [2024-10-14 16:49:20.765891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101784 len:8 PRP1 0x0 PRP2 0x0 00:24:30.771 [2024-10-14 16:49:20.765897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.771 [2024-10-14 16:49:20.765904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.771 [2024-10-14 16:49:20.765916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.765921] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101792 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.765928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.765935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.765940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.765945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101800 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.765951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.765957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.765962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.765967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101808 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.765974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.765980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.765985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.765991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101816 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.765997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101824 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101832 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766059] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101840 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101848 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101856 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101864 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101872 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101880 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101888 
len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101896 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101904 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101912 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101920 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101928 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101936 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 
16:49:20.766337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.766348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.766353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101944 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.766359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.772 [2024-10-14 16:49:20.766366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.772 [2024-10-14 16:49:20.776610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.772 [2024-10-14 16:49:20.776623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101952 len:8 PRP1 0x0 PRP2 0x0 00:24:30.772 [2024-10-14 16:49:20.776633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101960 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101968 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101976 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101984 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101992 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102000 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102008 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102016 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102024 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101320 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.776967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.776973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101328 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.776982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.776990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.773 [2024-10-14 16:49:20.777174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.773 [2024-10-14 16:49:20.777182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101336 len:8 PRP1 0x0 PRP2 0x0 00:24:30.773 [2024-10-14 16:49:20.777190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.777287] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e3a4e0 was disconnected and freed. reset controller. 00:24:30.773 [2024-10-14 16:49:20.777299] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:30.773 [2024-10-14 16:49:20.777328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.773 [2024-10-14 16:49:20.777338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.777348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.773 [2024-10-14 16:49:20.777357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.777366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.773 [2024-10-14 16:49:20.777375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.777384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.773 [2024-10-14 16:49:20.777392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:20.777400] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.773 [2024-10-14 16:49:20.777433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17400 (9): Bad file descriptor 00:24:30.773 [2024-10-14 16:49:20.781165] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.773 [2024-10-14 16:49:20.856001] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:30.773 10760.00 IOPS, 42.03 MiB/s [2024-10-14T14:49:35.407Z] 11023.33 IOPS, 43.06 MiB/s [2024-10-14T14:49:35.407Z] 11140.25 IOPS, 43.52 MiB/s [2024-10-14T14:49:35.407Z] [2024-10-14 16:49:24.257730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.773 [2024-10-14 16:49:24.257762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:24.257776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.773 [2024-10-14 16:49:24.257788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:24.257797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.773 [2024-10-14 16:49:24.257804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:24.257812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.773 [2024-10-14 16:49:24.257818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:24.257826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.773 [2024-10-14 16:49:24.257833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.773 [2024-10-14 16:49:24.257841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.773 [2024-10-14 16:49:24.257847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.257855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.257863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.257871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.257877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.257885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.257891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.257899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.257905] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.257913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.257919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.257927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.257934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.257941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.257948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.257956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.257962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.257972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.257978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.257986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.257993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258050] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:30.774 [2024-10-14 16:49:24.258343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.774 [2024-10-14 16:49:24.258413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.774 [2024-10-14 16:49:24.258419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 
16:49:24.258486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.775 [2024-10-14 16:49:24.258783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44608 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.775 [2024-10-14 16:49:24.258975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.775 [2024-10-14 16:49:24.258981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.258989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.776 [2024-10-14 16:49:24.258995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.776 [2024-10-14 16:49:24.259009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.776 [2024-10-14 16:49:24.259023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.776 [2024-10-14 16:49:24.259036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.776 [2024-10-14 16:49:24.259053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:30.776 [2024-10-14 16:49:24.259067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.776 [2024-10-14 16:49:24.259081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.776 [2024-10-14 16:49:24.259095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.776 [2024-10-14 16:49:24.259109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259206] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259345] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.776 [2024-10-14 16:49:24.259550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.776 [2024-10-14 16:49:24.259556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:24.259576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.777 [2024-10-14 16:49:24.259583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44984 len:8 PRP1 0x0 PRP2 0x0 00:24:30.777 [2024-10-14 16:49:24.259589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:24.259604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.777 [2024-10-14 16:49:24.259610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.777 [2024-10-14 16:49:24.259621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44992 len:8 PRP1 0x0 PRP2 0x0 00:24:30.777 [2024-10-14 16:49:24.259628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:24.259668] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f70bd0 was disconnected and freed. reset controller. 
00:24:30.777 [2024-10-14 16:49:24.259677] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:30.777 [2024-10-14 16:49:24.259697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.777 [2024-10-14 16:49:24.259704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:24.259712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.777 [2024-10-14 16:49:24.259718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:24.259725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.777 [2024-10-14 16:49:24.259732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:24.259738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.777 [2024-10-14 16:49:24.259745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:24.259751] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.777 [2024-10-14 16:49:24.262519] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.777 [2024-10-14 16:49:24.262549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17400 (9): Bad file descriptor 00:24:30.777 [2024-10-14 16:49:24.335299] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
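The run of *NOTICE* lines above shows every outstanding READ/WRITE on I/O qpair 1 being completed with "ABORTED - SQ DELETION (00/08)" as the qpair is torn down for the controller reset; the "(00/08)" pair printed by spdk_nvme_print_completion is the NVMe status code type and status code. A minimal Python sketch of how that pair can be decoded is shown below; it covers only the codes visible in this log, and the name decode_nvme_status is illustrative rather than part of SPDK or the test suite.

# Minimal decoder for the "(SCT/SC)" pair printed by spdk_nvme_print_completion,
# e.g. "ABORTED - SQ DELETION (00/08)". Values follow the NVMe base spec;
# only the codes appearing in this log are listed here.

STATUS_CODE_TYPES = {
    0x0: "GENERIC",  # Generic Command Status
}

GENERIC_STATUS_CODES = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # command aborted because its submission queue was deleted
}

def decode_nvme_status(sct: int, sc: int) -> str:
    """Return a human-readable name for a status code type / status code pair."""
    sct_name = STATUS_CODE_TYPES.get(sct, f"SCT 0x{sct:x}")
    if sct == 0x0:
        sc_name = GENERIC_STATUS_CODES.get(sc, f"SC 0x{sc:02x}")
    else:
        sc_name = f"SC 0x{sc:02x}"
    return f"{sc_name} ({sct:02x}/{sc:02x}) [{sct_name}]"

if __name__ == "__main__":
    # "(00/08)" from the log lines above: generic status, command aborted due to SQ deletion.
    print(decode_nvme_status(0x0, 0x08))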
00:24:30.777 10997.00 IOPS, 42.96 MiB/s [2024-10-14T14:49:35.411Z] 11078.67 IOPS, 43.28 MiB/s [2024-10-14T14:49:35.411Z] 11142.43 IOPS, 43.53 MiB/s [2024-10-14T14:49:35.411Z] 11149.50 IOPS, 43.55 MiB/s [2024-10-14T14:49:35.411Z] 11160.11 IOPS, 43.59 MiB/s [2024-10-14T14:49:35.411Z] [2024-10-14 16:49:28.684811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.684843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.684857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.684865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.684874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.684881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.684889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.684896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.684908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.684915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.684923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.684929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.684937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.684944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.684952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.684958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.684966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.684973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.684981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.684987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.684995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 
16:49:28.685273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.777 [2024-10-14 16:49:28.685287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.777 [2024-10-14 16:49:28.685294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.778 [2024-10-14 16:49:28.685527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.778 [2024-10-14 16:49:28.685782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.778 [2024-10-14 16:49:28.685788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 
16:49:28.685853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.685988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.685997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:58 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.779 [2024-10-14 16:49:28.686216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.779 [2024-10-14 16:49:28.686257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74096 len:8 PRP1 0x0 PRP2 0x0 00:24:30.779 [2024-10-14 16:49:28.686264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.779 [2024-10-14 16:49:28.686278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.779 [2024-10-14 16:49:28.686284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74104 len:8 PRP1 0x0 PRP2 0x0 00:24:30.779 [2024-10-14 16:49:28.686290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.779 [2024-10-14 16:49:28.686301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.779 [2024-10-14 16:49:28.686306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74112 len:8 PRP1 0x0 PRP2 0x0 00:24:30.779 [2024-10-14 16:49:28.686312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.779 [2024-10-14 16:49:28.686324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.779 [2024-10-14 16:49:28.686329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74120 len:8 PRP1 0x0 PRP2 0x0 00:24:30.779 [2024-10-14 16:49:28.686335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.779 [2024-10-14 16:49:28.686346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.779 [2024-10-14 16:49:28.686351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74128 len:8 PRP1 0x0 PRP2 0x0 00:24:30.779 [2024-10-14 16:49:28.686358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.779 [2024-10-14 16:49:28.686370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.779 [2024-10-14 16:49:28.686375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74136 len:8 PRP1 0x0 PRP2 0x0 00:24:30.779 [2024-10-14 16:49:28.686382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.779 [2024-10-14 16:49:28.686389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.779 [2024-10-14 16:49:28.686397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.779 [2024-10-14 16:49:28.686402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74144 len:8 PRP1 0x0 PRP2 0x0 00:24:30.779 [2024-10-14 16:49:28.686408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74152 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74160 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74168 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74176 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74184 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74192 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74200 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74208 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 
16:49:28.686604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74216 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74224 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74232 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74240 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74248 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74256 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.686740] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.686746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.686752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74264 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.686758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.697675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.697686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.697694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74272 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.697703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.697712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.697719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.697725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74280 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.697734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.697743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.697750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.697758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74288 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.697766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.697775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.697781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.697788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74296 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.697796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.697805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.780 [2024-10-14 16:49:28.697811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.697818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74304 len:8 PRP1 0x0 PRP2 0x0 00:24:30.780 [2024-10-14 16:49:28.697826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.780 [2024-10-14 16:49:28.697835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:30.780 [2024-10-14 16:49:28.697842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.780 [2024-10-14 16:49:28.697848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74312 len:8 PRP1 0x0 PRP2 0x0 00:24:30.781 [2024-10-14 16:49:28.697856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.781 [2024-10-14 16:49:28.697865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.781 [2024-10-14 16:49:28.697872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.781 [2024-10-14 16:49:28.697879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74320 len:8 PRP1 0x0 PRP2 0x0 00:24:30.781 [2024-10-14 16:49:28.697887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.781 [2024-10-14 16:49:28.697898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.781 [2024-10-14 16:49:28.697904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.781 [2024-10-14 16:49:28.697911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74328 len:8 PRP1 0x0 PRP2 0x0 00:24:30.781 [2024-10-14 16:49:28.697919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.781 [2024-10-14 16:49:28.697928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.781 [2024-10-14 16:49:28.697934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.781 [2024-10-14 16:49:28.697941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74336 len:8 PRP1 0x0 PRP2 0x0 00:24:30.781 [2024-10-14 16:49:28.697950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.781 [2024-10-14 16:49:28.697959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:30.781 [2024-10-14 16:49:28.697965] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:30.781 [2024-10-14 16:49:28.697972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74344 len:8 PRP1 0x0 PRP2 0x0 00:24:30.781 [2024-10-14 16:49:28.697980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.781 [2024-10-14 16:49:28.698026] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f711c0 was disconnected and freed. reset controller. 
00:24:30.781 [2024-10-14 16:49:28.698037] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:30.781 [2024-10-14 16:49:28.698073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.781 [2024-10-14 16:49:28.698086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.781 [2024-10-14 16:49:28.698096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.781 [2024-10-14 16:49:28.698105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.781 [2024-10-14 16:49:28.698114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.781 [2024-10-14 16:49:28.698122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.781 [2024-10-14 16:49:28.698132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:30.781 [2024-10-14 16:49:28.698140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.781 [2024-10-14 16:49:28.698149] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:30.781 [2024-10-14 16:49:28.698188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e17400 (9): Bad file descriptor 00:24:30.781 [2024-10-14 16:49:28.701909] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:30.781 [2024-10-14 16:49:28.778798] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
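The wall of "ABORTED - SQ DELETION (00/08)" completions above is the expected fallout of the failover path deleting the old I/O submission queue: every request still queued on that qpair is completed manually with an abort status so the upper layer can retry it on the new path. The pair in parentheses appears to be the NVMe status code type and status code in hex; SCT 00 is Generic Command Status, SC 08 is Command Aborted due to SQ Deletion, and dnr:0 leaves the completion retryable. A minimal bash sketch for summarizing those tuples from the captured try.txt (the decode_nvme_status helper and its lookup table are illustrative, not part of the test suite):

# Illustrative helper (not part of the SPDK tests): decode the "(sct/sc)" pair
# that the completion print routine appends to each line.
decode_nvme_status() {
    case "$1" in
        00/00) echo "Generic Command Status / Successful Completion" ;;
        00/08) echo "Generic Command Status / Command Aborted due to SQ Deletion" ;;
        *)     echo "unknown status ($1)" ;;
    esac
}

# Pull every status tuple out of the captured log and count them.
grep -o '([0-9a-f]\{2\}/[0-9a-f]\{2\})' try.txt | tr -d '()' | sort | uniq -c |
while read -r count code; do
    printf '%6d  %s  %s\n' "$count" "$code" "$(decode_nvme_status "$code")"
done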
00:24:30.781 11080.00 IOPS, 43.28 MiB/s [2024-10-14T14:49:35.415Z] 11096.00 IOPS, 43.34 MiB/s [2024-10-14T14:49:35.415Z] 11127.58 IOPS, 43.47 MiB/s [2024-10-14T14:49:35.415Z] 11137.00 IOPS, 43.50 MiB/s [2024-10-14T14:49:35.415Z] 11159.50 IOPS, 43.59 MiB/s 00:24:30.781 Latency(us) 00:24:30.781 [2024-10-14T14:49:35.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.781 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:30.781 Verification LBA range: start 0x0 length 0x4000 00:24:30.781 NVMe0n1 : 15.00 11171.72 43.64 684.51 0.00 10774.74 417.40 21470.84 00:24:30.781 [2024-10-14T14:49:35.415Z] =================================================================================================================== 00:24:30.781 [2024-10-14T14:49:35.415Z] Total : 11171.72 43.64 684.51 0.00 10774.74 417.40 21470.84 00:24:30.781 Received shutdown signal, test time was about 15.000000 seconds 00:24:30.781 00:24:30.781 Latency(us) 00:24:30.781 [2024-10-14T14:49:35.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.781 [2024-10-14T14:49:35.415Z] =================================================================================================================== 00:24:30.781 [2024-10-14T14:49:35.415Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=639154 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 639154 /var/tmp/bdevperf.sock 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 639154 ']' 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.781 16:49:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:30.781 16:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.781 16:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:30.781 16:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:30.781 [2024-10-14 16:49:35.341573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:30.781 16:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:31.040 [2024-10-14 16:49:35.526075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:31.040 16:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:31.299 NVMe0n1 00:24:31.299 16:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:31.867 00:24:31.867 16:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:32.126 00:24:32.126 16:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:32.126 16:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:32.385 16:49:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:32.385 16:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:35.673 16:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:35.673 16:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:35.673 16:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:35.673 16:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=639949 00:24:35.673 16:49:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 639949 00:24:37.050 { 00:24:37.050 "results": [ 00:24:37.050 { 00:24:37.050 "job": "NVMe0n1", 00:24:37.050 "core_mask": "0x1", 00:24:37.050 
"workload": "verify", 00:24:37.050 "status": "finished", 00:24:37.050 "verify_range": { 00:24:37.050 "start": 0, 00:24:37.050 "length": 16384 00:24:37.050 }, 00:24:37.050 "queue_depth": 128, 00:24:37.050 "io_size": 4096, 00:24:37.050 "runtime": 1.005174, 00:24:37.050 "iops": 11325.402368147206, 00:24:37.050 "mibps": 44.239853000575025, 00:24:37.050 "io_failed": 0, 00:24:37.050 "io_timeout": 0, 00:24:37.050 "avg_latency_us": 11262.434684603288, 00:24:37.050 "min_latency_us": 2168.9295238095237, 00:24:37.050 "max_latency_us": 9362.285714285714 00:24:37.050 } 00:24:37.050 ], 00:24:37.050 "core_count": 1 00:24:37.050 } 00:24:37.050 16:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:37.050 [2024-10-14 16:49:34.974615] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:24:37.050 [2024-10-14 16:49:34.974675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639154 ] 00:24:37.050 [2024-10-14 16:49:35.042531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.050 [2024-10-14 16:49:35.080854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.050 [2024-10-14 16:49:36.981576] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:37.050 [2024-10-14 16:49:36.981629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.050 [2024-10-14 16:49:36.981640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.051 [2024-10-14 16:49:36.981649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.051 [2024-10-14 16:49:36.981656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.051 [2024-10-14 16:49:36.981663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.051 [2024-10-14 16:49:36.981670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.051 [2024-10-14 16:49:36.981678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.051 [2024-10-14 16:49:36.981684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.051 [2024-10-14 16:49:36.981691] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.051 [2024-10-14 16:49:36.981715] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.051 [2024-10-14 16:49:36.981728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f4400 (9): Bad file descriptor 00:24:37.051 [2024-10-14 16:49:37.115763] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:37.051 Running I/O for 1 seconds... 00:24:37.051 11256.00 IOPS, 43.97 MiB/s 00:24:37.051 Latency(us) 00:24:37.051 [2024-10-14T14:49:41.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.051 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:37.051 Verification LBA range: start 0x0 length 0x4000 00:24:37.051 NVMe0n1 : 1.01 11325.40 44.24 0.00 0.00 11262.43 2168.93 9362.29 00:24:37.051 [2024-10-14T14:49:41.685Z] =================================================================================================================== 00:24:37.051 [2024-10-14T14:49:41.685Z] Total : 11325.40 44.24 0.00 0.00 11262.43 2168.93 9362.29 00:24:37.051 16:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:37.051 16:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:37.051 16:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:37.310 16:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:37.310 16:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:37.310 16:49:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:37.568 16:49:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 639154 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 639154 ']' 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 639154 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 639154 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 639154' 00:24:40.856 killing process with pid 639154 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 639154 00:24:40.856 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 639154 00:24:41.115 16:49:45 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:41.115 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:41.115 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:41.115 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:41.115 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:41.115 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:41.115 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:41.374 rmmod nvme_tcp 00:24:41.374 rmmod nvme_fabrics 00:24:41.374 rmmod nvme_keyring 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 636227 ']' 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 636227 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 636227 ']' 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 636227 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 636227 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 636227' 00:24:41.374 killing process with pid 636227 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 636227 00:24:41.374 16:49:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 636227 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.633 16:49:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.534 16:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:43.534 00:24:43.534 real 0m37.290s 00:24:43.534 user 1m57.821s 00:24:43.534 sys 0m8.005s 00:24:43.534 16:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:43.534 16:49:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:43.534 ************************************ 00:24:43.534 END TEST nvmf_failover 00:24:43.534 ************************************ 00:24:43.534 16:49:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:43.534 16:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:43.534 16:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:43.534 16:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.793 ************************************ 00:24:43.793 START TEST nvmf_host_discovery 00:24:43.793 ************************************ 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:43.793 * Looking for test storage... 
00:24:43.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:43.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.793 --rc genhtml_branch_coverage=1 00:24:43.793 --rc genhtml_function_coverage=1 00:24:43.793 --rc genhtml_legend=1 00:24:43.793 --rc geninfo_all_blocks=1 00:24:43.793 --rc geninfo_unexecuted_blocks=1 00:24:43.793 00:24:43.793 ' 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:43.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.793 --rc genhtml_branch_coverage=1 00:24:43.793 --rc genhtml_function_coverage=1 00:24:43.793 --rc genhtml_legend=1 00:24:43.793 --rc geninfo_all_blocks=1 00:24:43.793 --rc geninfo_unexecuted_blocks=1 00:24:43.793 00:24:43.793 ' 00:24:43.793 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:43.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.794 --rc genhtml_branch_coverage=1 00:24:43.794 --rc genhtml_function_coverage=1 00:24:43.794 --rc genhtml_legend=1 00:24:43.794 --rc geninfo_all_blocks=1 00:24:43.794 --rc geninfo_unexecuted_blocks=1 00:24:43.794 00:24:43.794 ' 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:43.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.794 --rc genhtml_branch_coverage=1 00:24:43.794 --rc genhtml_function_coverage=1 00:24:43.794 --rc genhtml_legend=1 00:24:43.794 --rc geninfo_all_blocks=1 00:24:43.794 --rc geninfo_unexecuted_blocks=1 00:24:43.794 00:24:43.794 ' 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:43.794 16:49:48 
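The block above is the coverage preamble probing the installed lcov: scripts/common.sh splits each version string on '.', '-' and ':' and compares the fields numerically, so 1.15 sorts below 2 and the lcov/genhtml coverage options are exported. A stripped-down sketch of that comparison (the version_lt name and body are a simplification written for this note, not the actual scripts/common.sh code):

# Minimal dotted-version "less than" check in the spirit of the lt() seen above
version_lt() {
    local -a a b
    IFS='.-:' read -r -a a <<< "$1"
    IFS='.-:' read -r -a b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # mirrors the lcov version check in the trace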
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:43.794 16:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.362 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.362 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:50.362 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:50.362 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:50.362 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:50.363 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:50.363 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.363 16:49:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:50.363 Found net devices under 0000:86:00.0: cvl_0_0 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:50.363 Found net devices under 0000:86:00.1: cvl_0_1 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.363 
16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:50.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:24:50.363 00:24:50.363 --- 10.0.0.2 ping statistics --- 00:24:50.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.363 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:24:50.363 00:24:50.363 --- 10.0.0.1 ping statistics --- 00:24:50.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.363 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=644397 00:24:50.363 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 644397 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 644397 ']' 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 [2024-10-14 16:49:54.383111] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:24:50.364 [2024-10-14 16:49:54.383164] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.364 [2024-10-14 16:49:54.459067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.364 [2024-10-14 16:49:54.498551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.364 [2024-10-14 16:49:54.498584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.364 [2024-10-14 16:49:54.498591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.364 [2024-10-14 16:49:54.498597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.364 [2024-10-14 16:49:54.498608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.364 [2024-10-14 16:49:54.499136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 [2024-10-14 16:49:54.641094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 [2024-10-14 16:49:54.653291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 null0 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 null1 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=644425 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 644425 /tmp/host.sock 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 644425 ']' 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:50.364 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 [2024-10-14 16:49:54.730646] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
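At this point the trace has created the TCP transport, added the discovery listener on 10.0.0.2:8009, backed the future namespaces with the null0/null1 bdevs, and started a second nvmf_tgt as the host-side application on its own RPC socket; the discovery service itself is started just below with bdev_nvme_start_discovery. A condensed sketch of those RPC steps, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper and $SPDK_DIR as an assumed checkout path:

    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    rpc()  { "$SPDK_DIR/scripts/rpc.py" "$@"; }                    # target side (/var/tmp/spdk.sock)
    hrpc() { "$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock "$@"; }  # host side
    # target-side provisioning (discovery.sh@32-37)
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc bdev_null_create null0 1000 512                            # 1000 MB, 512-byte blocks
    rpc bdev_null_create null1 1000 512
    rpc bdev_wait_for_examine
    # host-side application (discovery.sh@44-46), then enable logging and start discovery
    "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
    until hrpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    hrpc log_set_flag bdev_nvme
    hrpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test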
00:24:50.364 [2024-10-14 16:49:54.730691] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid644425 ] 00:24:50.364 [2024-10-14 16:49:54.797258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.364 [2024-10-14 16:49:54.839663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.364 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.623 16:49:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # jq -r '.[].name' 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.624 [2024-10-14 16:49:55.250823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.624 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:50.882 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:24:50.883 16:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:51.449 [2024-10-14 16:49:55.998751] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:51.449 [2024-10-14 16:49:55.998770] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:51.449 
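The rpc_cmd/jq/sort/xargs pipelines traced throughout this stretch are the discovery.sh helpers, which flatten each RPC result into a single comparable string, plus waitforcondition from autotest_common.sh, which retries such a comparison up to ten times. A reconstructed sketch of how those helpers behave in this trace, with a placeholder hrpc wrapper standing in for rpc_cmd -s /tmp/host.sock; this is an approximation of the library code, not a verbatim copy:

    hrpc() { scripts/rpc.py -s /tmp/host.sock "$@"; }

    get_subsystem_names() { hrpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()       { hrpc bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }
    get_subsystem_paths() { hrpc bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }

    notify_id=0
    get_notification_count() {
        # count events newer than the last seen id, then advance the id (matches the traced values 0 -> 1 -> 2)
        notification_count=$(hrpc notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    waitforcondition() {
        # retry the condition up to 10 times, sleeping between attempts (autotest_common.sh@914-920)
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
    # used in the trace e.g. as: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'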
[2024-10-14 16:49:55.998784] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:51.707 [2024-10-14 16:49:56.087039] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:51.707 [2024-10-14 16:49:56.312189] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:51.707 [2024-10-14 16:49:56.312208] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.965 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:52.224 16:49:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.224 [2024-10-14 16:49:56.750833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:52.224 [2024-10-14 16:49:56.751196] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:52.224 [2024-10-14 16:49:56.751217] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.224 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:52.225 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:52.225 16:49:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:52.483 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:52.483 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.483 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.483 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:52.483 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:52.483 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:52.483 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.483 [2024-10-14 16:49:56.879608] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:52.483 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:52.483 16:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:52.483 [2024-10-14 16:49:56.944133] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:52.483 [2024-10-14 16:49:56.944150] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:52.483 [2024-10-14 16:49:56.944155] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:53.418 16:49:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.418 16:49:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.418 [2024-10-14 16:49:58.014446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.418 [2024-10-14 16:49:58.014469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.418 [2024-10-14 16:49:58.014478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.418 [2024-10-14 16:49:58.014485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.418 [2024-10-14 16:49:58.014493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.418 [2024-10-14 16:49:58.014499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.418 [2024-10-14 16:49:58.014506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.418 [2024-10-14 
16:49:58.014513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.418 [2024-10-14 16:49:58.014520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231f450 is same with the state(6) to be set 00:24:53.418 [2024-10-14 16:49:58.015315] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:53.418 [2024-10-14 16:49:58.015328] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:53.418 [2024-10-14 16:49:58.024455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231f450 (9): Bad file descriptor 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.418 [2024-10-14 16:49:58.034492] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:53.418 [2024-10-14 16:49:58.034706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.418 [2024-10-14 16:49:58.034722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231f450 with addr=10.0.0.2, port=4420 00:24:53.418 [2024-10-14 16:49:58.034731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231f450 is same with the state(6) to be set 00:24:53.418 [2024-10-14 16:49:58.034742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231f450 (9): Bad file descriptor 00:24:53.418 [2024-10-14 16:49:58.034753] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:53.418 [2024-10-14 16:49:58.034760] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:53.418 [2024-10-14 16:49:58.034768] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:53.418 [2024-10-14 16:49:58.034778] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.418 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.418 [2024-10-14 16:49:58.044548] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:53.418 [2024-10-14 16:49:58.044831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.418 [2024-10-14 16:49:58.044844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231f450 with addr=10.0.0.2, port=4420 00:24:53.418 [2024-10-14 16:49:58.044851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231f450 is same with the state(6) to be set 00:24:53.418 [2024-10-14 16:49:58.044861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231f450 (9): Bad file descriptor 00:24:53.418 [2024-10-14 16:49:58.044871] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:53.418 [2024-10-14 16:49:58.044877] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:53.419 [2024-10-14 16:49:58.044884] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:53.419 [2024-10-14 16:49:58.044893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.676 [2024-10-14 16:49:58.054598] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:53.676 [2024-10-14 16:49:58.054835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.676 [2024-10-14 16:49:58.054847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231f450 with addr=10.0.0.2, port=4420 00:24:53.676 [2024-10-14 16:49:58.054854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231f450 is same with the state(6) to be set 00:24:53.676 [2024-10-14 16:49:58.054864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231f450 (9): Bad file descriptor 00:24:53.676 [2024-10-14 16:49:58.054874] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:53.676 [2024-10-14 16:49:58.054884] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:53.676 [2024-10-14 16:49:58.054890] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:53.676 [2024-10-14 16:49:58.054900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
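The connect() failures with errno 111 in these records are the expected fallout of the step traced just above: nvmf_subsystem_remove_listener drops the 10.0.0.2:4420 listener, the host's nvme0 path to that address starts failing, and the next discovery log page prunes it, so the test only needs the path list to shrink to the second port. A minimal sketch of that step and the wait, reusing the placeholder rpc wrappers from the earlier sketches:

    rpc()  { scripts/rpc.py "$@"; }                        # target side
    hrpc() { scripts/rpc.py -s /tmp/host.sock "$@"; }      # host side
    # discovery.sh@127: drop the first data listener on the target
    rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # discovery.sh@131: wait until only the 4421 path is reported for nvme0
    for _ in $(seq 1 10); do
        paths=$(hrpc bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        [ "$paths" = "4421" ] && break
        sleep 1
    done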
00:24:53.676 [2024-10-14 16:49:58.064653] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:53.676 [2024-10-14 16:49:58.064831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.676 [2024-10-14 16:49:58.064845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231f450 with addr=10.0.0.2, port=4420 00:24:53.676 [2024-10-14 16:49:58.064853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231f450 is same with the state(6) to be set 00:24:53.676 [2024-10-14 16:49:58.064863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231f450 (9): Bad file descriptor 00:24:53.676 [2024-10-14 16:49:58.064873] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:53.676 [2024-10-14 16:49:58.064879] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:53.676 [2024-10-14 16:49:58.064886] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:53.676 [2024-10-14 16:49:58.064896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.676 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.676 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:53.676 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:53.676 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:53.676 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.677 [2024-10-14 16:49:58.074707] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:53.677 [2024-10-14 16:49:58.074894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.677 [2024-10-14 16:49:58.074909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231f450 with addr=10.0.0.2, port=4420 00:24:53.677 [2024-10-14 16:49:58.074918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x231f450 is same with the state(6) to be set 00:24:53.677 [2024-10-14 16:49:58.074929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231f450 (9): Bad file descriptor 00:24:53.677 [2024-10-14 16:49:58.074939] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:53.677 [2024-10-14 16:49:58.074946] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:53.677 [2024-10-14 16:49:58.074956] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:53.677 [2024-10-14 16:49:58.074965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.677 [2024-10-14 16:49:58.084760] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:53.677 [2024-10-14 16:49:58.084946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.677 [2024-10-14 16:49:58.084959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231f450 with addr=10.0.0.2, port=4420 00:24:53.677 [2024-10-14 16:49:58.084966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231f450 is same with the state(6) to be set 00:24:53.677 [2024-10-14 16:49:58.084976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231f450 (9): Bad file descriptor 00:24:53.677 [2024-10-14 16:49:58.084993] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:53.677 [2024-10-14 16:49:58.084999] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:53.677 [2024-10-14 16:49:58.085006] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:53.677 [2024-10-14 16:49:58.085016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.677 [2024-10-14 16:49:58.094814] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:53.677 [2024-10-14 16:49:58.095067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.677 [2024-10-14 16:49:58.095079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231f450 with addr=10.0.0.2, port=4420 00:24:53.677 [2024-10-14 16:49:58.095086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231f450 is same with the state(6) to be set 00:24:53.677 [2024-10-14 16:49:58.095096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231f450 (9): Bad file descriptor 00:24:53.677 [2024-10-14 16:49:58.095111] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:53.677 [2024-10-14 16:49:58.095118] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:53.677 [2024-10-14 16:49:58.095125] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:53.677 [2024-10-14 16:49:58.095134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
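Once the discovery poller reports the 4420 path not found and 4421 found again (just below), the checks settle: the path list is back to a single port and no new notifications arrived. The trace then stops the discovery service and expects both the controller list and the bdev list to drain. A sketch of that teardown with the same placeholder host-side wrapper:

    hrpc() { scripts/rpc.py -s /tmp/host.sock "$@"; }
    hrpc bdev_nvme_stop_discovery -b nvme                  # discovery.sh@134, traced further below
    # discovery.sh@136-137: wait for the controller and bdev lists to go empty
    for _ in $(seq 1 10); do
        ctrls=$(hrpc bdev_nvme_get_controllers | jq -r '.[].name' | xargs)
        bdevs=$(hrpc bdev_get_bdevs | jq -r '.[].name' | xargs)
        [ -z "$ctrls" ] && [ -z "$bdevs" ] && break        # nvme0 and its namespaces are gone
        sleep 1
    done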
00:24:53.677 [2024-10-14 16:49:58.102134] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:53.677 [2024-10-14 16:49:58.102149] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.677 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.935 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.935 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:53.935 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:53.935 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:53.935 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:53.935 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:53.935 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.935 16:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.868 [2024-10-14 16:49:59.408076] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:54.868 [2024-10-14 16:49:59.408092] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:54.868 [2024-10-14 16:49:59.408102] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:55.126 [2024-10-14 16:49:59.534486] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:55.385 [2024-10-14 16:49:59.795678] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:55.385 [2024-10-14 16:49:59.795703] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.385 request: 00:24:55.385 { 00:24:55.385 "name": "nvme", 00:24:55.385 "trtype": "tcp", 00:24:55.385 "traddr": "10.0.0.2", 00:24:55.385 "adrfam": "ipv4", 00:24:55.385 "trsvcid": "8009", 00:24:55.385 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:55.385 "wait_for_attach": true, 00:24:55.385 "method": "bdev_nvme_start_discovery", 00:24:55.385 "req_id": 1 00:24:55.385 } 00:24:55.385 Got JSON-RPC error response 00:24:55.385 response: 00:24:55.385 { 00:24:55.385 "code": -17, 00:24:55.385 "message": "File exists" 00:24:55.385 } 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:55.385 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.386 request: 00:24:55.386 { 00:24:55.386 "name": "nvme_second", 00:24:55.386 "trtype": "tcp", 00:24:55.386 "traddr": "10.0.0.2", 00:24:55.386 "adrfam": "ipv4", 00:24:55.386 "trsvcid": "8009", 00:24:55.386 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:55.386 "wait_for_attach": true, 00:24:55.386 "method": "bdev_nvme_start_discovery", 00:24:55.386 "req_id": 1 00:24:55.386 } 00:24:55.386 Got JSON-RPC error response 00:24:55.386 response: 00:24:55.386 { 00:24:55.386 "code": -17, 00:24:55.386 "message": "File exists" 00:24:55.386 } 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:55.386 16:49:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.386 16:49:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:55.386 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.644 16:50:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.576 [2024-10-14 16:50:01.039744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.576 [2024-10-14 16:50:01.039772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2350c90 with addr=10.0.0.2, port=8010 00:24:56.576 [2024-10-14 16:50:01.039788] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:56.576 [2024-10-14 16:50:01.039795] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:56.576 [2024-10-14 16:50:01.039801] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:57.510 [2024-10-14 16:50:02.042107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.510 [2024-10-14 16:50:02.042131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2350c90 with addr=10.0.0.2, port=8010 00:24:57.510 [2024-10-14 16:50:02.042143] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:57.510 [2024-10-14 16:50:02.042149] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:24:57.510 [2024-10-14 16:50:02.042155] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:58.443 [2024-10-14 16:50:03.044341] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:58.443 request: 00:24:58.443 { 00:24:58.443 "name": "nvme_second", 00:24:58.443 "trtype": "tcp", 00:24:58.443 "traddr": "10.0.0.2", 00:24:58.443 "adrfam": "ipv4", 00:24:58.443 "trsvcid": "8010", 00:24:58.443 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:58.443 "wait_for_attach": false, 00:24:58.443 "attach_timeout_ms": 3000, 00:24:58.443 "method": "bdev_nvme_start_discovery", 00:24:58.443 "req_id": 1 00:24:58.443 } 00:24:58.443 Got JSON-RPC error response 00:24:58.443 response: 00:24:58.443 { 00:24:58.443 "code": -110, 00:24:58.443 "message": "Connection timed out" 00:24:58.443 } 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:58.443 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 644425 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:58.702 rmmod nvme_tcp 00:24:58.702 rmmod nvme_fabrics 00:24:58.702 rmmod nvme_keyring 00:24:58.702 16:50:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 644397 ']' 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 644397 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 644397 ']' 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 644397 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 644397 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 644397' 00:24:58.702 killing process with pid 644397 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 644397 00:24:58.702 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 644397 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.961 16:50:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.864 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.864 00:25:00.864 real 0m17.250s 00:25:00.864 user 0m20.649s 00:25:00.864 sys 0m5.764s 00:25:00.864 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.864 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.864 
************************************ 00:25:00.864 END TEST nvmf_host_discovery 00:25:00.864 ************************************ 00:25:00.864 16:50:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:00.864 16:50:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:00.864 16:50:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.864 16:50:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.124 ************************************ 00:25:01.124 START TEST nvmf_host_multipath_status 00:25:01.124 ************************************ 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:01.124 * Looking for test storage... 00:25:01.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:01.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.124 --rc genhtml_branch_coverage=1 00:25:01.124 --rc genhtml_function_coverage=1 00:25:01.124 --rc genhtml_legend=1 00:25:01.124 --rc geninfo_all_blocks=1 00:25:01.124 --rc geninfo_unexecuted_blocks=1 00:25:01.124 00:25:01.124 ' 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:01.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.124 --rc genhtml_branch_coverage=1 00:25:01.124 --rc genhtml_function_coverage=1 00:25:01.124 --rc genhtml_legend=1 00:25:01.124 --rc geninfo_all_blocks=1 00:25:01.124 --rc geninfo_unexecuted_blocks=1 00:25:01.124 00:25:01.124 ' 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:01.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.124 --rc genhtml_branch_coverage=1 00:25:01.124 --rc genhtml_function_coverage=1 00:25:01.124 --rc genhtml_legend=1 00:25:01.124 --rc geninfo_all_blocks=1 00:25:01.124 --rc geninfo_unexecuted_blocks=1 00:25:01.124 00:25:01.124 ' 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:01.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.124 --rc genhtml_branch_coverage=1 00:25:01.124 --rc genhtml_function_coverage=1 00:25:01.124 --rc genhtml_legend=1 00:25:01.124 --rc geninfo_all_blocks=1 00:25:01.124 --rc geninfo_unexecuted_blocks=1 00:25:01.124 00:25:01.124 ' 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
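Just above, scripts/common.sh ran `lt 1.15 2` (via `cmp_versions`) to decide whether the installed lcov predates 2.x before multipath_status.sh sources test/nvmf/common.sh. As a reader's aid, here is a minimal, self-contained sketch of that dot-separated, field-by-field version comparison; it is a simplified illustration of the idiom the trace shows, not the exact helper from scripts/common.sh:

# Simplified "is version A less than version B" check (illustrative only).
lt() {
    local -a v1 v2
    local i
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x, use the legacy --rc options"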
00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.124 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:01.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:01.125 16:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.694 16:50:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.694 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:07.695 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
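At this point nvmf/common.sh has matched the first E810 function (0x8086 - 0x159b) against its PCI allow-list; the same check repeats for 0000:86:00.1 just below, after which the script resolves which kernel net interfaces each function exposes by globbing sysfs. A minimal stand-alone sketch of that lookup, using the same glob the trace shows (the surrounding bookkeeping of the real helper is omitted):

# For each allow-listed PCI function, list the net devices the kernel bound to it.
for pci in 0000:86:00.0 0000:86:00.1; do          # addresses taken from the log above
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [[ -e $path ]] || continue                # skip if no netdev (driver not bound)
        echo "Found net devices under $pci: ${path##*/}"
    done
done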
00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:07.695 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:07.695 Found net devices under 0000:86:00.0: cvl_0_0 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:25:07.695 Found net devices under 0000:86:00.1: cvl_0_1 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:07.695 16:50:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:07.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:07.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:25:07.695 00:25:07.695 --- 10.0.0.2 ping statistics --- 00:25:07.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.695 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:07.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:25:07.695 00:25:07.695 --- 10.0.0.1 ping statistics --- 00:25:07.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.695 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=649493 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 649493 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 649493 ']' 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.695 16:50:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:07.695 [2024-10-14 16:50:11.691206] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:25:07.695 [2024-10-14 16:50:11.691247] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.695 [2024-10-14 16:50:11.748818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:07.695 [2024-10-14 16:50:11.790900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.695 [2024-10-14 16:50:11.790935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.695 [2024-10-14 16:50:11.790942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.695 [2024-10-14 16:50:11.790948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.695 [2024-10-14 16:50:11.790953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.695 [2024-10-14 16:50:11.792138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.695 [2024-10-14 16:50:11.792147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=649493 00:25:07.695 16:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:07.695 [2024-10-14 16:50:12.083237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.695 16:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:07.695 Malloc0 00:25:07.954 16:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
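Once the target's RPC socket is listening, the test configures it over scripts/rpc.py: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, and the subsystem that will be exported, created with ANA reporting enabled (-r) since the multipath status checks depend on it. A condensed sketch of the calls visible in the trace (paths shortened to the SPDK repo root):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                   # 64 MB, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2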
SPDK00000000000001 -r -m 2 00:25:07.954 16:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:08.212 16:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:08.470 [2024-10-14 16:50:12.891071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.470 16:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:08.470 [2024-10-14 16:50:13.079510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=649747 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 649747 /var/tmp/bdevperf.sock 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 649747 ']' 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
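Malloc0 is then attached as a namespace of cnode1 and the subsystem is exposed on two TCP listeners, 4420 and 4421, on the same target address; these two ports are the two paths whose ANA states the test flips later. bdevperf is started in idle mode (-z) on its own RPC socket so controllers can be attached to it afterwards by RPC. A sketch of the same sequence:

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &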
00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:08.727 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:08.985 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:09.550 Nvme0n1 00:25:09.550 16:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:09.807 Nvme0n1 00:25:09.807 16:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:09.807 16:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:12.332 16:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:12.332 16:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:12.332 16:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:12.332 16:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:13.265 16:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:13.265 16:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:13.265 16:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.265 16:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.522 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.522 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.522 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
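On the initiator side the test attaches the same subsystem to bdevperf twice, once per listener, with -x multipath, so both connections fold into a single Nvme0n1 bdev with two I/O paths; perform_tests then drives the verify workload against that bdev while the ANA states are changed underneath it. The attach sequence as it appears in the trace (all against bdevperf's /var/tmp/bdevperf.sock, paths shortened):

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &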
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.522 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.779 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.779 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:13.779 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.779 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:14.037 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.037 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:14.037 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.037 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:14.037 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.037 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:14.037 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.037 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:14.294 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.294 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:14.294 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.294 16:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.551 16:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.551 16:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:14.551 16:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
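Each check_status call in this trace expands into six port_status checks: the script asks bdevperf for bdev_nvme_get_io_paths and uses jq to pull the current/connected/accessible flag of the io_path whose trsvcid matches the port, then compares it with the expected value. A condensed sketch of what that helper appears to do (the function body is reconstructed from the trace, not the script's literal code):

  port_status() {    # usage: port_status <port> <current|connected|accessible> <true|false>
    local port=$1 field=$2 expected=$3 actual
    actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
  }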
00:25:14.808 16:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:15.066 16:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:15.999 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:15.999 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:15.999 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.999 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:16.256 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.256 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:16.256 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.256 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:16.514 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.514 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:16.514 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.514 16:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.514 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.514 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.514 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.514 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:16.773 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.773 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:16.773 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:25:16.773 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:17.031 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.031 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:17.031 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.031 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:17.288 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:17.288 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:17.289 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:17.546 16:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:17.546 16:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:18.918 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:18.918 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:18.918 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.918 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.918 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.918 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:18.918 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.918 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:19.175 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:19.175 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:19.175 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
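The state pairs being exercised (optimized/optimized, non_optimized/optimized, non_optimized/non_optimized, non_optimized/inaccessible, and so on) are applied through the set_ANA_state helper, which simply resets the ANA state of the 4420 and 4421 listeners on the target side; the following check_status then verifies how the initiator's multipath layer reclassified the two paths. Roughly, again reconstructed from the trace rather than copied from the script:

  set_ANA_state() {  # usage: set_ANA_state <state for port 4420> <state for port 4421>
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }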
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.175 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:19.175 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.175 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:19.175 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.175 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:19.432 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.432 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:19.432 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:19.432 16:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.690 16:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.690 16:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:19.690 16:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.691 16:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:19.948 16:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.949 16:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:19.949 16:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:20.206 16:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:20.519 16:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:21.502 16:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:21.502 16:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:21.502 16:50:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.502 16:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:21.502 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.502 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:21.502 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.502 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:21.761 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.761 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:21.761 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.761 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:22.019 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.019 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:22.019 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.019 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:22.276 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.276 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:22.276 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.276 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:22.276 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.276 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:22.276 16:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.276 16:50:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:22.534 16:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.534 16:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:22.534 16:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:22.791 16:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:23.049 16:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:23.981 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:23.981 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:23.981 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.981 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:24.239 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.239 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:24.239 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.239 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:24.496 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.496 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:24.496 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.496 16:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.496 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.496 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.496 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.496 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.754 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.754 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:24.754 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.754 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:25.011 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.011 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:25.011 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.011 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:25.268 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.268 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:25.268 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:25.525 16:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:25.525 16:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:26.899 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:26.899 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:26.899 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.899 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.899 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.899 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:26.899 16:50:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:26.899 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.157 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.157 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:27.157 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.157 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.157 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.157 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.157 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.157 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.414 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.414 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:27.414 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.414 16:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.671 16:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.671 16:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:27.671 16:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.671 16:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:27.929 16:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.929 16:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:28.186 16:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
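Halfway through, the test switches Nvme0n1 from the default active_passive policy to active_active, after which both optimized listeners are expected to report current==true at the same time (the check_status true true ... sequences that follow). The call, as seen just above:

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active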
set_ANA_state optimized optimized 00:25:28.186 16:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:28.186 16:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:28.444 16:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:29.817 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:29.817 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:29.817 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.817 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.817 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.817 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:29.817 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.817 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:30.075 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.075 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:30.075 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.075 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:30.075 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.075 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:30.075 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.075 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.333 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.333 16:50:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.333 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.333 16:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.591 16:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.591 16:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:30.591 16:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.591 16:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.850 16:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.850 16:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:30.850 16:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:30.850 16:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:31.108 16:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:32.040 16:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:32.040 16:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:32.040 16:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.040 16:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.298 16:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.298 16:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:32.298 16:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.298 16:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.556 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.556 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.556 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.556 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.814 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.814 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.814 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.814 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:33.071 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.071 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:33.071 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.071 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.071 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.071 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:33.071 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.071 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.329 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.329 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:33.329 16:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:33.586 16:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:33.843 16:50:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
00:25:34.777 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:34.777 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:34.777 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.777 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:35.035 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.035 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:35.035 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.035 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.293 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.293 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.293 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.293 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.550 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.550 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.550 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.550 16:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.808 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.808 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.808 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.808 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.808 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.808 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.808 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.808 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.066 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.066 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:36.066 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:36.324 16:50:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:36.582 16:50:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:37.514 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:37.514 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:37.514 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.514 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.771 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.771 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:37.771 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.771 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:38.028 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.028 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:38.028 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.028 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:38.285 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:38.285 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:38.285 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.285 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.285 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.285 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:38.285 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.286 16:50:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.542 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.542 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:38.543 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.543 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 649747 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 649747 ']' 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 649747 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 649747 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 649747' 00:25:38.800 killing process with pid 649747 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 649747 00:25:38.800 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 649747 00:25:38.800 { 00:25:38.800 "results": [ 00:25:38.800 { 00:25:38.800 "job": "Nvme0n1", 00:25:38.800 
"core_mask": "0x4", 00:25:38.800 "workload": "verify", 00:25:38.800 "status": "terminated", 00:25:38.800 "verify_range": { 00:25:38.800 "start": 0, 00:25:38.800 "length": 16384 00:25:38.800 }, 00:25:38.800 "queue_depth": 128, 00:25:38.800 "io_size": 4096, 00:25:38.800 "runtime": 28.836669, 00:25:38.800 "iops": 10724.886428456768, 00:25:38.800 "mibps": 41.89408761115925, 00:25:38.800 "io_failed": 0, 00:25:38.800 "io_timeout": 0, 00:25:38.800 "avg_latency_us": 11915.309420925158, 00:25:38.800 "min_latency_us": 335.4819047619048, 00:25:38.800 "max_latency_us": 3019898.88 00:25:38.800 } 00:25:38.800 ], 00:25:38.800 "core_count": 1 00:25:38.800 } 00:25:39.062 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 649747 00:25:39.062 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:39.062 [2024-10-14 16:50:13.154313] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:25:39.062 [2024-10-14 16:50:13.154363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid649747 ] 00:25:39.062 [2024-10-14 16:50:13.221445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.062 [2024-10-14 16:50:13.261643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:39.062 Running I/O for 90 seconds... 00:25:39.062 11464.00 IOPS, 44.78 MiB/s [2024-10-14T14:50:43.696Z] 11554.50 IOPS, 45.13 MiB/s [2024-10-14T14:50:43.696Z] 11509.00 IOPS, 44.96 MiB/s [2024-10-14T14:50:43.696Z] 11532.25 IOPS, 45.05 MiB/s [2024-10-14T14:50:43.696Z] 11530.60 IOPS, 45.04 MiB/s [2024-10-14T14:50:43.696Z] 11504.33 IOPS, 44.94 MiB/s [2024-10-14T14:50:43.696Z] 11501.71 IOPS, 44.93 MiB/s [2024-10-14T14:50:43.696Z] 11498.25 IOPS, 44.92 MiB/s [2024-10-14T14:50:43.696Z] 11499.44 IOPS, 44.92 MiB/s [2024-10-14T14:50:43.696Z] 11521.50 IOPS, 45.01 MiB/s [2024-10-14T14:50:43.696Z] 11518.00 IOPS, 44.99 MiB/s [2024-10-14T14:50:43.696Z] 11527.75 IOPS, 45.03 MiB/s [2024-10-14T14:50:43.696Z] [2024-10-14 16:50:27.282109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 
[2024-10-14 16:50:27.282249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.282982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.282995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.283002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.283014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.062 [2024-10-14 16:50:27.283021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:39.062 [2024-10-14 16:50:27.283034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:25:39.063 [2024-10-14 16:50:27.283171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 
16:50:27.283763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.063 [2024-10-14 16:50:27.283822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:39.063 [2024-10-14 16:50:27.283835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.283841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.283854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.283861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.283873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.283880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.283895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.283902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.283914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.283922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.283936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.283943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.283956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130504 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.283963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284293] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.064 [2024-10-14 16:50:27.284466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 
16:50:27.284528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 
dnr:0 00:25:39.064 [2024-10-14 16:50:27.284765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:39.064 [2024-10-14 16:50:27.284857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.064 [2024-10-14 16:50:27.284864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.284879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.284888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.284903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.284910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.284926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.284933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.284949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.284956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.284972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.284978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.284994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.285001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.285026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.285049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.285072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.285094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.285117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.285139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:27.285162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.065 [2024-10-14 16:50:27.285191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.065 [2024-10-14 16:50:27.285214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.065 [2024-10-14 16:50:27.285237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.065 [2024-10-14 16:50:27.285260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.065 [2024-10-14 16:50:27.285283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.065 [2024-10-14 16:50:27.285305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:27.285321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.065 [2024-10-14 16:50:27.285328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:39.065 11337.00 IOPS, 44.29 MiB/s [2024-10-14T14:50:43.699Z] 10527.21 IOPS, 41.12 MiB/s [2024-10-14T14:50:43.699Z] 9825.40 IOPS, 38.38 MiB/s [2024-10-14T14:50:43.699Z] 9365.81 IOPS, 36.59 MiB/s [2024-10-14T14:50:43.699Z] 9492.53 IOPS, 37.08 MiB/s [2024-10-14T14:50:43.699Z] 9596.67 IOPS, 37.49 MiB/s [2024-10-14T14:50:43.699Z] 9781.74 IOPS, 38.21 MiB/s [2024-10-14T14:50:43.699Z] 9976.70 IOPS, 38.97 MiB/s [2024-10-14T14:50:43.699Z] 10150.14 IOPS, 39.65 MiB/s [2024-10-14T14:50:43.699Z] 10210.09 IOPS, 39.88 MiB/s [2024-10-14T14:50:43.699Z] 10267.78 IOPS, 40.11 MiB/s [2024-10-14T14:50:43.699Z] 10336.83 IOPS, 40.38 MiB/s [2024-10-14T14:50:43.699Z] 10476.76 IOPS, 40.92 MiB/s [2024-10-14T14:50:43.699Z] 10593.31 IOPS, 41.38 MiB/s [2024-10-14T14:50:43.699Z] [2024-10-14 16:50:41.053776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.053814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.053863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.053872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.053885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 
[2024-10-14 16:50:41.053892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.053905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.053916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.053929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.053936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.053948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.053955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.053967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.053974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.053986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.053993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.054005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.054013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.054025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.054032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.054044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.054051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.054063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.054070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.054082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31968 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.054090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.054102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.054109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.054122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.054128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.054141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.054148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.054161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.054168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:39.065 [2024-10-14 16:50:41.054182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.065 [2024-10-14 16:50:41.054189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.054202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.054209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.054221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.054228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.054241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.054248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.054261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.054268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.055838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:121 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.055859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.055875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.055883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.055895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.055902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.055915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.055921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.055934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.055941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.055953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.055961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.055976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.055983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.055996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056053] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.066 [2024-10-14 16:50:41.056370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.066 [2024-10-14 16:50:41.056394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.066 [2024-10-14 16:50:41.056414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.066 [2024-10-14 16:50:41.056434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.066 [2024-10-14 16:50:41.056453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.066 [2024-10-14 16:50:41.056472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.066 [2024-10-14 16:50:41.056491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:39.066 [2024-10-14 16:50:41.056606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.066 [2024-10-14 16:50:41.056613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:39.066 10671.89 IOPS, 41.69 MiB/s [2024-10-14T14:50:43.700Z] 10704.61 IOPS, 41.81 MiB/s [2024-10-14T14:50:43.701Z] Received shutdown signal, test time was about 28.837327 seconds 00:25:39.067 00:25:39.067 Latency(us) 00:25:39.067 [2024-10-14T14:50:43.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.067 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:39.067 Verification LBA range: start 0x0 length 0x4000 00:25:39.067 Nvme0n1 : 28.84 10724.89 41.89 0.00 0.00 11915.31 335.48 3019898.88 00:25:39.067 [2024-10-14T14:50:43.701Z] =================================================================================================================== 00:25:39.067 [2024-10-14T14:50:43.701Z] Total : 10724.89 41.89 0.00 0.00 11915.31 335.48 3019898.88 00:25:39.067 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:39.325 rmmod nvme_tcp 00:25:39.325 rmmod nvme_fabrics 00:25:39.325 rmmod nvme_keyring 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 649493 ']' 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 649493 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 649493 ']' 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 649493 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 649493 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 649493' 00:25:39.325 killing process with pid 649493 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 649493 00:25:39.325 16:50:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 649493 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.585 16:50:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.489 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:41.489 00:25:41.489 real 0m40.597s 00:25:41.489 user 1m50.268s 00:25:41.489 sys 0m11.490s 00:25:41.489 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:41.489 16:50:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:41.489 ************************************ 00:25:41.489 END TEST nvmf_host_multipath_status 00:25:41.489 ************************************ 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.749 ************************************ 00:25:41.749 START TEST nvmf_discovery_remove_ifc 00:25:41.749 ************************************ 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:41.749 * Looking for test storage... 
00:25:41.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:41.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.749 --rc genhtml_branch_coverage=1 00:25:41.749 --rc genhtml_function_coverage=1 00:25:41.749 --rc genhtml_legend=1 00:25:41.749 --rc geninfo_all_blocks=1 00:25:41.749 --rc geninfo_unexecuted_blocks=1 00:25:41.749 00:25:41.749 ' 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:41.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.749 --rc genhtml_branch_coverage=1 00:25:41.749 --rc genhtml_function_coverage=1 00:25:41.749 --rc genhtml_legend=1 00:25:41.749 --rc geninfo_all_blocks=1 00:25:41.749 --rc geninfo_unexecuted_blocks=1 00:25:41.749 00:25:41.749 ' 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:41.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.749 --rc genhtml_branch_coverage=1 00:25:41.749 --rc genhtml_function_coverage=1 00:25:41.749 --rc genhtml_legend=1 00:25:41.749 --rc geninfo_all_blocks=1 00:25:41.749 --rc geninfo_unexecuted_blocks=1 00:25:41.749 00:25:41.749 ' 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:41.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.749 --rc genhtml_branch_coverage=1 00:25:41.749 --rc genhtml_function_coverage=1 00:25:41.749 --rc genhtml_legend=1 00:25:41.749 --rc geninfo_all_blocks=1 00:25:41.749 --rc geninfo_unexecuted_blocks=1 00:25:41.749 00:25:41.749 ' 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:41.749 
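(The xtrace above is scripts/common.sh deciding which lcov flags to export: `lt 1.15 2` calls `cmp_versions 1.15 '<' 2`, which splits both versions on `.-:` and compares them component by component. A condensed, stand-alone sketch of that check is below; the function name and the "equal is not less-than" return value are assumptions for illustration, only the splitting and the per-component comparison are taken from the trace.)

  version_lt() {                 # sketch: returns 0 when $1 < $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      local x=${a[i]:-0} y=${b[i]:-0}
      (( x > y )) && return 1    # first differing component decides
      (( x < y )) && return 0
    done
    return 1                     # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov older than 2.x"   # matches the trace: 1.15 < 2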
16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.749 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:41.750 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:42.009 16:50:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:48.575 16:50:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:48.575 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:48.576 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.576 16:50:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:48.576 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:48.576 Found net devices under 0000:86:00.0: cvl_0_0 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:48.576 Found net devices under 0000:86:00.1: cvl_0_1 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:48.576 
16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:48.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:25:48.576 00:25:48.576 --- 10.0.0.2 ping statistics --- 00:25:48.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.576 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:25:48.576 00:25:48.576 --- 10.0.0.1 ping statistics --- 00:25:48.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.576 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=658493 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 658493 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 658493 ']' 00:25:48.576 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:48.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.577 [2024-10-14 16:50:52.367320] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:25:48.577 [2024-10-14 16:50:52.367365] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.577 [2024-10-14 16:50:52.440881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.577 [2024-10-14 16:50:52.482134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.577 [2024-10-14 16:50:52.482166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.577 [2024-10-14 16:50:52.482173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.577 [2024-10-14 16:50:52.482179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.577 [2024-10-14 16:50:52.482184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.577 [2024-10-14 16:50:52.482740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.577 [2024-10-14 16:50:52.633771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.577 [2024-10-14 16:50:52.641957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:48.577 null0 00:25:48.577 [2024-10-14 16:50:52.673929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=658524 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 658524 /tmp/host.sock 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 658524 ']' 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:48.577 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.577 [2024-10-14 16:50:52.742376] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:25:48.577 [2024-10-14 16:50:52.742415] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658524 ] 00:25:48.577 [2024-10-14 16:50:52.808076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.577 [2024-10-14 16:50:52.848548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.577 16:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.512 [2024-10-14 16:50:53.999979] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:49.512 [2024-10-14 16:50:53.999998] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:49.512 [2024-10-14 16:50:54.000013] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:49.512 [2024-10-14 16:50:54.128410] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:49.771 [2024-10-14 16:50:54.192081] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:49.771 [2024-10-14 16:50:54.192123] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:49.771 [2024-10-14 16:50:54.192145] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:49.771 [2024-10-14 16:50:54.192157] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:49.771 [2024-10-14 16:50:54.192174] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.771 [2024-10-14 16:50:54.199012] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x201ca50 was disconnected and freed. delete nvme_qpair. 
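(At this point the target nvmf_tgt, pid 658493 on core mask 0x2 inside the namespace, is listening on 10.0.0.2:8009 for discovery and 10.0.0.2:4420 for I/O with the null0 bdev behind nqn.2016-06.io.spdk:cnode0, and a second nvmf_tgt, pid 658524 on core mask 0x1 with RPC socket /tmp/host.sock, plays the host role. The host-side attach that the rpc_cmd wrapper performed above amounts to plain rpc.py calls like the sketch below; the option values are copied from the trace, the rest is illustrative.)

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # host-side bdev_nvme instance was started with --wait-for-rpc -L bdev_nvme
  $RPC -s /tmp/host.sock bdev_nvme_set_options -e 1
  $RPC -s /tmp/host.sock framework_start_init
  # attach through the discovery service; the short loss/reconnect timeouts are what
  # let the test observe the controller being dropped once the interface is removed
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  # once discovery attaches cnode0, its namespace shows up as bdev "nvme0n1"
  $RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # -> nvme0n1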
00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:49.771 16:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:51.194 16:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:51.194 16:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.194 16:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.194 16:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.194 16:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:51.194 16:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:51.194 16:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:51.194 16:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.194 16:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:51.194 16:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:52.131 16:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.131 16:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.131 16:50:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.131 16:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.131 16:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.131 16:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.131 16:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.131 16:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.131 16:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:52.131 16:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:53.066 16:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.066 16:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.066 16:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.066 16:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.066 16:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.066 16:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.066 16:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.066 16:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.066 16:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:53.066 16:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:54.001 16:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.001 16:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.001 16:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.001 16:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.001 16:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.001 16:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.001 16:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.001 16:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.001 16:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:54.001 16:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:55.377 16:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.377 16:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.377 16:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.377 16:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.377 16:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.377 16:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.377 16:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.377 16:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.377 [2024-10-14 16:50:59.633749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:55.377 [2024-10-14 16:50:59.633783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.378 [2024-10-14 16:50:59.633793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.378 [2024-10-14 16:50:59.633801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.378 [2024-10-14 16:50:59.633808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.378 [2024-10-14 16:50:59.633815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.378 [2024-10-14 16:50:59.633821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.378 [2024-10-14 16:50:59.633828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.378 [2024-10-14 16:50:59.633835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.378 [2024-10-14 16:50:59.633842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.378 [2024-10-14 16:50:59.633848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.378 [2024-10-14 16:50:59.633855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff92e0 is same with the state(6) to be set 00:25:55.378 [2024-10-14 16:50:59.643771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff92e0 (9): Bad file descriptor 00:25:55.378 16:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:55.378 16:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:55.378 [2024-10-14 16:50:59.653809] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:56.314 16:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 
00:25:56.314 16:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.314 16:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.314 16:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.314 16:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.314 16:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.314 16:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.314 [2024-10-14 16:51:00.701650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:56.314 [2024-10-14 16:51:00.701727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff92e0 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-14 16:51:00.701760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff92e0 is same with the state(6) to be set 00:25:56.314 [2024-10-14 16:51:00.701820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff92e0 (9): Bad file descriptor 00:25:56.314 [2024-10-14 16:51:00.702791] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:56.314 [2024-10-14 16:51:00.702858] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:56.314 [2024-10-14 16:51:00.702882] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:56.314 [2024-10-14 16:51:00.702903] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:56.314 [2024-10-14 16:51:00.702968] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-14 16:51:00.702993] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:56.314 16:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.314 16:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:56.314 16:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:57.250 [2024-10-14 16:51:01.705486] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:57.250 [2024-10-14 16:51:01.705512] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:57.250 [2024-10-14 16:51:01.705519] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:57.250 [2024-10-14 16:51:01.705528] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:57.250 [2024-10-14 16:51:01.705541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.250 [2024-10-14 16:51:01.705558] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:57.250 [2024-10-14 16:51:01.705583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.250 [2024-10-14 16:51:01.705592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.250 [2024-10-14 16:51:01.705607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.250 [2024-10-14 16:51:01.705614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.250 [2024-10-14 16:51:01.705621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.250 [2024-10-14 16:51:01.705628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.250 [2024-10-14 16:51:01.705635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.250 [2024-10-14 16:51:01.705641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.250 [2024-10-14 16:51:01.705649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.250 [2024-10-14 16:51:01.705655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.250 [2024-10-14 16:51:01.705661] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:57.250 [2024-10-14 16:51:01.706089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe89c0 (9): Bad file descriptor 00:25:57.250 [2024-10-14 16:51:01.707101] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:57.250 [2024-10-14 16:51:01.707111] nvme_ctrlr.c:1233:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:57.250 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.509 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:57.509 16:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:58.445 16:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.445 16:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.445 16:51:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.445 16:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.445 16:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.445 16:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.445 16:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.445 16:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.445 16:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:58.445 16:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.382 [2024-10-14 16:51:03.765113] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:59.382 [2024-10-14 16:51:03.765130] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:59.382 [2024-10-14 16:51:03.765143] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:59.382 [2024-10-14 16:51:03.891563] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:59.382 16:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.382 16:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.382 16:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.382 16:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.382 16:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.382 16:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.382 16:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.382 16:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.382 16:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:59.382 16:51:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.382 [2024-10-14 16:51:03.988832] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:59.382 [2024-10-14 16:51:03.988867] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:59.382 [2024-10-14 16:51:03.988884] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:59.382 [2024-10-14 16:51:03.988897] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:59.382 [2024-10-14 16:51:03.988903] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:59.382 [2024-10-14 16:51:03.994143] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ff49f0 was disconnected and freed. 
delete nvme_qpair. 00:26:00.759 16:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.760 16:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.760 16:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.760 16:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.760 16:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.760 16:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.760 16:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 658524 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 658524 ']' 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 658524 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 658524 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 658524' 00:26:00.760 killing process with pid 658524 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 658524 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 658524 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:00.760 rmmod nvme_tcp 00:26:00.760 rmmod nvme_fabrics 00:26:00.760 rmmod nvme_keyring 
00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 658493 ']' 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 658493 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 658493 ']' 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 658493 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 658493 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 658493' 00:26:00.760 killing process with pid 658493 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 658493 00:26:00.760 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 658493 00:26:01.018 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:01.018 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:01.018 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:01.018 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:01.018 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:01.018 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:01.018 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:01.019 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:01.019 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:01.019 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.019 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.019 16:51:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:03.556 00:26:03.556 real 0m21.408s 00:26:03.556 user 0m26.642s 00:26:03.556 sys 0m5.808s 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.556 ************************************ 00:26:03.556 END TEST nvmf_discovery_remove_ifc 00:26:03.556 ************************************ 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.556 ************************************ 00:26:03.556 START TEST nvmf_identify_kernel_target 00:26:03.556 ************************************ 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:03.556 * Looking for test storage... 00:26:03.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:03.556 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:03.557 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:03.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.558 --rc genhtml_branch_coverage=1 00:26:03.558 --rc genhtml_function_coverage=1 00:26:03.558 --rc genhtml_legend=1 00:26:03.558 --rc geninfo_all_blocks=1 00:26:03.558 --rc geninfo_unexecuted_blocks=1 00:26:03.558 00:26:03.558 ' 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:03.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.558 --rc genhtml_branch_coverage=1 00:26:03.558 --rc genhtml_function_coverage=1 00:26:03.558 --rc genhtml_legend=1 00:26:03.558 --rc geninfo_all_blocks=1 00:26:03.558 --rc geninfo_unexecuted_blocks=1 00:26:03.558 00:26:03.558 ' 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:03.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.558 --rc genhtml_branch_coverage=1 00:26:03.558 --rc genhtml_function_coverage=1 00:26:03.558 --rc genhtml_legend=1 00:26:03.558 --rc geninfo_all_blocks=1 00:26:03.558 --rc geninfo_unexecuted_blocks=1 00:26:03.558 00:26:03.558 ' 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:03.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.558 --rc genhtml_branch_coverage=1 00:26:03.558 --rc genhtml_function_coverage=1 00:26:03.558 --rc genhtml_legend=1 00:26:03.558 --rc geninfo_all_blocks=1 00:26:03.558 --rc geninfo_unexecuted_blocks=1 00:26:03.558 00:26:03.558 ' 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.558 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.559 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:03.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.560 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.561 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.561 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:03.561 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:03.561 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:03.561 16:51:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:10.130 16:51:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:10.130 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:10.131 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:10.131 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:10.131 Found net devices under 0000:86:00.0: cvl_0_0 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:10.131 Found net devices under 0000:86:00.1: cvl_0_1 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:10.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:26:10.131 00:26:10.131 --- 10.0.0.2 ping statistics --- 00:26:10.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.131 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:10.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:26:10.131 00:26:10.131 --- 10.0.0.1 ping statistics --- 00:26:10.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.131 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:10.131 16:51:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:10.131 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:10.132 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:10.132 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:10.132 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:10.132 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:10.132 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:10.132 16:51:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:12.037 Waiting for block devices as requested 00:26:12.037 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:12.296 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:12.296 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:12.296 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:12.556 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:12.556 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:12.556 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:12.556 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:12.815 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:12.815 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:12.815 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:13.073 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:13.073 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:13.073 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:13.073 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:13.332 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:13.332 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:13.332 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:13.332 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:13.332 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:13.332 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:13.332 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:13.332 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
00:26:13.332 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:13.332 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:13.332 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:13.332 No valid GPT data, bailing 00:26:13.590 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:13.590 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:13.590 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:13.590 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:13.590 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:13.590 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:13.590 16:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:13.590 00:26:13.590 Discovery Log Number of Records 2, Generation counter 2 00:26:13.590 =====Discovery Log Entry 0====== 00:26:13.590 trtype: tcp 00:26:13.590 adrfam: ipv4 00:26:13.590 subtype: current discovery subsystem 00:26:13.590 treq: not specified, sq flow control disable supported 00:26:13.590 portid: 1 00:26:13.590 trsvcid: 4420 00:26:13.590 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:13.590 traddr: 10.0.0.1 00:26:13.590 eflags: none 00:26:13.590 sectype: none 00:26:13.590 =====Discovery Log Entry 1====== 00:26:13.590 trtype: tcp 00:26:13.590 adrfam: ipv4 00:26:13.590 subtype: nvme subsystem 00:26:13.590 treq: not specified, sq flow control disable 
supported 00:26:13.590 portid: 1 00:26:13.590 trsvcid: 4420 00:26:13.590 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:13.590 traddr: 10.0.0.1 00:26:13.590 eflags: none 00:26:13.590 sectype: none 00:26:13.590 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:13.590 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:13.590 ===================================================== 00:26:13.590 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:13.590 ===================================================== 00:26:13.590 Controller Capabilities/Features 00:26:13.590 ================================ 00:26:13.590 Vendor ID: 0000 00:26:13.591 Subsystem Vendor ID: 0000 00:26:13.591 Serial Number: 6ebcc49971d6484e62ec 00:26:13.591 Model Number: Linux 00:26:13.591 Firmware Version: 6.8.9-20 00:26:13.591 Recommended Arb Burst: 0 00:26:13.591 IEEE OUI Identifier: 00 00 00 00:26:13.591 Multi-path I/O 00:26:13.591 May have multiple subsystem ports: No 00:26:13.591 May have multiple controllers: No 00:26:13.591 Associated with SR-IOV VF: No 00:26:13.591 Max Data Transfer Size: Unlimited 00:26:13.591 Max Number of Namespaces: 0 00:26:13.591 Max Number of I/O Queues: 1024 00:26:13.591 NVMe Specification Version (VS): 1.3 00:26:13.591 NVMe Specification Version (Identify): 1.3 00:26:13.591 Maximum Queue Entries: 1024 00:26:13.591 Contiguous Queues Required: No 00:26:13.591 Arbitration Mechanisms Supported 00:26:13.591 Weighted Round Robin: Not Supported 00:26:13.591 Vendor Specific: Not Supported 00:26:13.591 Reset Timeout: 7500 ms 00:26:13.591 Doorbell Stride: 4 bytes 00:26:13.591 NVM Subsystem Reset: Not Supported 00:26:13.591 Command Sets Supported 00:26:13.591 NVM Command Set: Supported 00:26:13.591 Boot Partition: Not Supported 00:26:13.591 Memory Page Size Minimum: 4096 bytes 00:26:13.591 Memory Page Size Maximum: 4096 bytes 00:26:13.591 Persistent Memory Region: Not Supported 00:26:13.591 Optional Asynchronous Events Supported 00:26:13.591 Namespace Attribute Notices: Not Supported 00:26:13.591 Firmware Activation Notices: Not Supported 00:26:13.591 ANA Change Notices: Not Supported 00:26:13.591 PLE Aggregate Log Change Notices: Not Supported 00:26:13.591 LBA Status Info Alert Notices: Not Supported 00:26:13.591 EGE Aggregate Log Change Notices: Not Supported 00:26:13.591 Normal NVM Subsystem Shutdown event: Not Supported 00:26:13.591 Zone Descriptor Change Notices: Not Supported 00:26:13.591 Discovery Log Change Notices: Supported 00:26:13.591 Controller Attributes 00:26:13.591 128-bit Host Identifier: Not Supported 00:26:13.591 Non-Operational Permissive Mode: Not Supported 00:26:13.591 NVM Sets: Not Supported 00:26:13.591 Read Recovery Levels: Not Supported 00:26:13.591 Endurance Groups: Not Supported 00:26:13.591 Predictable Latency Mode: Not Supported 00:26:13.591 Traffic Based Keep ALive: Not Supported 00:26:13.591 Namespace Granularity: Not Supported 00:26:13.591 SQ Associations: Not Supported 00:26:13.591 UUID List: Not Supported 00:26:13.591 Multi-Domain Subsystem: Not Supported 00:26:13.591 Fixed Capacity Management: Not Supported 00:26:13.591 Variable Capacity Management: Not Supported 00:26:13.591 Delete Endurance Group: Not Supported 00:26:13.591 Delete NVM Set: Not Supported 00:26:13.591 Extended LBA Formats Supported: Not Supported 00:26:13.591 Flexible Data Placement 
Supported: Not Supported 00:26:13.591 00:26:13.591 Controller Memory Buffer Support 00:26:13.591 ================================ 00:26:13.591 Supported: No 00:26:13.591 00:26:13.591 Persistent Memory Region Support 00:26:13.591 ================================ 00:26:13.591 Supported: No 00:26:13.591 00:26:13.591 Admin Command Set Attributes 00:26:13.591 ============================ 00:26:13.591 Security Send/Receive: Not Supported 00:26:13.591 Format NVM: Not Supported 00:26:13.591 Firmware Activate/Download: Not Supported 00:26:13.591 Namespace Management: Not Supported 00:26:13.591 Device Self-Test: Not Supported 00:26:13.591 Directives: Not Supported 00:26:13.591 NVMe-MI: Not Supported 00:26:13.591 Virtualization Management: Not Supported 00:26:13.591 Doorbell Buffer Config: Not Supported 00:26:13.591 Get LBA Status Capability: Not Supported 00:26:13.591 Command & Feature Lockdown Capability: Not Supported 00:26:13.591 Abort Command Limit: 1 00:26:13.591 Async Event Request Limit: 1 00:26:13.591 Number of Firmware Slots: N/A 00:26:13.591 Firmware Slot 1 Read-Only: N/A 00:26:13.591 Firmware Activation Without Reset: N/A 00:26:13.591 Multiple Update Detection Support: N/A 00:26:13.591 Firmware Update Granularity: No Information Provided 00:26:13.591 Per-Namespace SMART Log: No 00:26:13.591 Asymmetric Namespace Access Log Page: Not Supported 00:26:13.591 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:13.591 Command Effects Log Page: Not Supported 00:26:13.591 Get Log Page Extended Data: Supported 00:26:13.591 Telemetry Log Pages: Not Supported 00:26:13.591 Persistent Event Log Pages: Not Supported 00:26:13.591 Supported Log Pages Log Page: May Support 00:26:13.591 Commands Supported & Effects Log Page: Not Supported 00:26:13.591 Feature Identifiers & Effects Log Page:May Support 00:26:13.591 NVMe-MI Commands & Effects Log Page: May Support 00:26:13.591 Data Area 4 for Telemetry Log: Not Supported 00:26:13.591 Error Log Page Entries Supported: 1 00:26:13.591 Keep Alive: Not Supported 00:26:13.591 00:26:13.591 NVM Command Set Attributes 00:26:13.591 ========================== 00:26:13.591 Submission Queue Entry Size 00:26:13.591 Max: 1 00:26:13.591 Min: 1 00:26:13.591 Completion Queue Entry Size 00:26:13.591 Max: 1 00:26:13.591 Min: 1 00:26:13.591 Number of Namespaces: 0 00:26:13.591 Compare Command: Not Supported 00:26:13.591 Write Uncorrectable Command: Not Supported 00:26:13.591 Dataset Management Command: Not Supported 00:26:13.591 Write Zeroes Command: Not Supported 00:26:13.591 Set Features Save Field: Not Supported 00:26:13.591 Reservations: Not Supported 00:26:13.591 Timestamp: Not Supported 00:26:13.591 Copy: Not Supported 00:26:13.591 Volatile Write Cache: Not Present 00:26:13.591 Atomic Write Unit (Normal): 1 00:26:13.591 Atomic Write Unit (PFail): 1 00:26:13.591 Atomic Compare & Write Unit: 1 00:26:13.591 Fused Compare & Write: Not Supported 00:26:13.591 Scatter-Gather List 00:26:13.591 SGL Command Set: Supported 00:26:13.591 SGL Keyed: Not Supported 00:26:13.591 SGL Bit Bucket Descriptor: Not Supported 00:26:13.591 SGL Metadata Pointer: Not Supported 00:26:13.591 Oversized SGL: Not Supported 00:26:13.591 SGL Metadata Address: Not Supported 00:26:13.591 SGL Offset: Supported 00:26:13.591 Transport SGL Data Block: Not Supported 00:26:13.591 Replay Protected Memory Block: Not Supported 00:26:13.591 00:26:13.591 Firmware Slot Information 00:26:13.591 ========================= 00:26:13.591 Active slot: 0 00:26:13.591 00:26:13.591 00:26:13.591 Error Log 00:26:13.591 
========= 00:26:13.591 00:26:13.591 Active Namespaces 00:26:13.591 ================= 00:26:13.591 Discovery Log Page 00:26:13.591 ================== 00:26:13.591 Generation Counter: 2 00:26:13.591 Number of Records: 2 00:26:13.591 Record Format: 0 00:26:13.591 00:26:13.591 Discovery Log Entry 0 00:26:13.591 ---------------------- 00:26:13.591 Transport Type: 3 (TCP) 00:26:13.591 Address Family: 1 (IPv4) 00:26:13.591 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:13.591 Entry Flags: 00:26:13.591 Duplicate Returned Information: 0 00:26:13.591 Explicit Persistent Connection Support for Discovery: 0 00:26:13.591 Transport Requirements: 00:26:13.591 Secure Channel: Not Specified 00:26:13.591 Port ID: 1 (0x0001) 00:26:13.591 Controller ID: 65535 (0xffff) 00:26:13.591 Admin Max SQ Size: 32 00:26:13.591 Transport Service Identifier: 4420 00:26:13.591 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:13.591 Transport Address: 10.0.0.1 00:26:13.591 Discovery Log Entry 1 00:26:13.591 ---------------------- 00:26:13.591 Transport Type: 3 (TCP) 00:26:13.591 Address Family: 1 (IPv4) 00:26:13.591 Subsystem Type: 2 (NVM Subsystem) 00:26:13.591 Entry Flags: 00:26:13.591 Duplicate Returned Information: 0 00:26:13.591 Explicit Persistent Connection Support for Discovery: 0 00:26:13.591 Transport Requirements: 00:26:13.591 Secure Channel: Not Specified 00:26:13.591 Port ID: 1 (0x0001) 00:26:13.591 Controller ID: 65535 (0xffff) 00:26:13.591 Admin Max SQ Size: 32 00:26:13.591 Transport Service Identifier: 4420 00:26:13.591 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:13.591 Transport Address: 10.0.0.1 00:26:13.591 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:13.851 get_feature(0x01) failed 00:26:13.851 get_feature(0x02) failed 00:26:13.851 get_feature(0x04) failed 00:26:13.851 ===================================================== 00:26:13.851 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:13.851 ===================================================== 00:26:13.851 Controller Capabilities/Features 00:26:13.851 ================================ 00:26:13.851 Vendor ID: 0000 00:26:13.851 Subsystem Vendor ID: 0000 00:26:13.851 Serial Number: 6d5022f5145c3ff5b5e6 00:26:13.851 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:13.851 Firmware Version: 6.8.9-20 00:26:13.851 Recommended Arb Burst: 6 00:26:13.851 IEEE OUI Identifier: 00 00 00 00:26:13.851 Multi-path I/O 00:26:13.851 May have multiple subsystem ports: Yes 00:26:13.851 May have multiple controllers: Yes 00:26:13.851 Associated with SR-IOV VF: No 00:26:13.851 Max Data Transfer Size: Unlimited 00:26:13.851 Max Number of Namespaces: 1024 00:26:13.851 Max Number of I/O Queues: 128 00:26:13.851 NVMe Specification Version (VS): 1.3 00:26:13.851 NVMe Specification Version (Identify): 1.3 00:26:13.851 Maximum Queue Entries: 1024 00:26:13.851 Contiguous Queues Required: No 00:26:13.851 Arbitration Mechanisms Supported 00:26:13.851 Weighted Round Robin: Not Supported 00:26:13.851 Vendor Specific: Not Supported 00:26:13.851 Reset Timeout: 7500 ms 00:26:13.851 Doorbell Stride: 4 bytes 00:26:13.851 NVM Subsystem Reset: Not Supported 00:26:13.851 Command Sets Supported 00:26:13.851 NVM Command Set: Supported 00:26:13.851 Boot Partition: Not Supported 00:26:13.851 
Memory Page Size Minimum: 4096 bytes 00:26:13.851 Memory Page Size Maximum: 4096 bytes 00:26:13.851 Persistent Memory Region: Not Supported 00:26:13.851 Optional Asynchronous Events Supported 00:26:13.851 Namespace Attribute Notices: Supported 00:26:13.851 Firmware Activation Notices: Not Supported 00:26:13.851 ANA Change Notices: Supported 00:26:13.851 PLE Aggregate Log Change Notices: Not Supported 00:26:13.851 LBA Status Info Alert Notices: Not Supported 00:26:13.851 EGE Aggregate Log Change Notices: Not Supported 00:26:13.851 Normal NVM Subsystem Shutdown event: Not Supported 00:26:13.851 Zone Descriptor Change Notices: Not Supported 00:26:13.851 Discovery Log Change Notices: Not Supported 00:26:13.851 Controller Attributes 00:26:13.851 128-bit Host Identifier: Supported 00:26:13.851 Non-Operational Permissive Mode: Not Supported 00:26:13.851 NVM Sets: Not Supported 00:26:13.851 Read Recovery Levels: Not Supported 00:26:13.851 Endurance Groups: Not Supported 00:26:13.851 Predictable Latency Mode: Not Supported 00:26:13.851 Traffic Based Keep ALive: Supported 00:26:13.851 Namespace Granularity: Not Supported 00:26:13.851 SQ Associations: Not Supported 00:26:13.851 UUID List: Not Supported 00:26:13.851 Multi-Domain Subsystem: Not Supported 00:26:13.851 Fixed Capacity Management: Not Supported 00:26:13.851 Variable Capacity Management: Not Supported 00:26:13.851 Delete Endurance Group: Not Supported 00:26:13.851 Delete NVM Set: Not Supported 00:26:13.851 Extended LBA Formats Supported: Not Supported 00:26:13.851 Flexible Data Placement Supported: Not Supported 00:26:13.851 00:26:13.851 Controller Memory Buffer Support 00:26:13.851 ================================ 00:26:13.851 Supported: No 00:26:13.851 00:26:13.851 Persistent Memory Region Support 00:26:13.851 ================================ 00:26:13.851 Supported: No 00:26:13.851 00:26:13.851 Admin Command Set Attributes 00:26:13.852 ============================ 00:26:13.852 Security Send/Receive: Not Supported 00:26:13.852 Format NVM: Not Supported 00:26:13.852 Firmware Activate/Download: Not Supported 00:26:13.852 Namespace Management: Not Supported 00:26:13.852 Device Self-Test: Not Supported 00:26:13.852 Directives: Not Supported 00:26:13.852 NVMe-MI: Not Supported 00:26:13.852 Virtualization Management: Not Supported 00:26:13.852 Doorbell Buffer Config: Not Supported 00:26:13.852 Get LBA Status Capability: Not Supported 00:26:13.852 Command & Feature Lockdown Capability: Not Supported 00:26:13.852 Abort Command Limit: 4 00:26:13.852 Async Event Request Limit: 4 00:26:13.852 Number of Firmware Slots: N/A 00:26:13.852 Firmware Slot 1 Read-Only: N/A 00:26:13.852 Firmware Activation Without Reset: N/A 00:26:13.852 Multiple Update Detection Support: N/A 00:26:13.852 Firmware Update Granularity: No Information Provided 00:26:13.852 Per-Namespace SMART Log: Yes 00:26:13.852 Asymmetric Namespace Access Log Page: Supported 00:26:13.852 ANA Transition Time : 10 sec 00:26:13.852 00:26:13.852 Asymmetric Namespace Access Capabilities 00:26:13.852 ANA Optimized State : Supported 00:26:13.852 ANA Non-Optimized State : Supported 00:26:13.852 ANA Inaccessible State : Supported 00:26:13.852 ANA Persistent Loss State : Supported 00:26:13.852 ANA Change State : Supported 00:26:13.852 ANAGRPID is not changed : No 00:26:13.852 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:13.852 00:26:13.852 ANA Group Identifier Maximum : 128 00:26:13.852 Number of ANA Group Identifiers : 128 00:26:13.852 Max Number of Allowed Namespaces : 1024 00:26:13.852 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:13.852 Command Effects Log Page: Supported 00:26:13.852 Get Log Page Extended Data: Supported 00:26:13.852 Telemetry Log Pages: Not Supported 00:26:13.852 Persistent Event Log Pages: Not Supported 00:26:13.852 Supported Log Pages Log Page: May Support 00:26:13.852 Commands Supported & Effects Log Page: Not Supported 00:26:13.852 Feature Identifiers & Effects Log Page:May Support 00:26:13.852 NVMe-MI Commands & Effects Log Page: May Support 00:26:13.852 Data Area 4 for Telemetry Log: Not Supported 00:26:13.852 Error Log Page Entries Supported: 128 00:26:13.852 Keep Alive: Supported 00:26:13.852 Keep Alive Granularity: 1000 ms 00:26:13.852 00:26:13.852 NVM Command Set Attributes 00:26:13.852 ========================== 00:26:13.852 Submission Queue Entry Size 00:26:13.852 Max: 64 00:26:13.852 Min: 64 00:26:13.852 Completion Queue Entry Size 00:26:13.852 Max: 16 00:26:13.852 Min: 16 00:26:13.852 Number of Namespaces: 1024 00:26:13.852 Compare Command: Not Supported 00:26:13.852 Write Uncorrectable Command: Not Supported 00:26:13.852 Dataset Management Command: Supported 00:26:13.852 Write Zeroes Command: Supported 00:26:13.852 Set Features Save Field: Not Supported 00:26:13.852 Reservations: Not Supported 00:26:13.852 Timestamp: Not Supported 00:26:13.852 Copy: Not Supported 00:26:13.852 Volatile Write Cache: Present 00:26:13.852 Atomic Write Unit (Normal): 1 00:26:13.852 Atomic Write Unit (PFail): 1 00:26:13.852 Atomic Compare & Write Unit: 1 00:26:13.852 Fused Compare & Write: Not Supported 00:26:13.852 Scatter-Gather List 00:26:13.852 SGL Command Set: Supported 00:26:13.852 SGL Keyed: Not Supported 00:26:13.852 SGL Bit Bucket Descriptor: Not Supported 00:26:13.852 SGL Metadata Pointer: Not Supported 00:26:13.852 Oversized SGL: Not Supported 00:26:13.852 SGL Metadata Address: Not Supported 00:26:13.852 SGL Offset: Supported 00:26:13.852 Transport SGL Data Block: Not Supported 00:26:13.852 Replay Protected Memory Block: Not Supported 00:26:13.852 00:26:13.852 Firmware Slot Information 00:26:13.852 ========================= 00:26:13.852 Active slot: 0 00:26:13.852 00:26:13.852 Asymmetric Namespace Access 00:26:13.852 =========================== 00:26:13.852 Change Count : 0 00:26:13.852 Number of ANA Group Descriptors : 1 00:26:13.852 ANA Group Descriptor : 0 00:26:13.852 ANA Group ID : 1 00:26:13.852 Number of NSID Values : 1 00:26:13.852 Change Count : 0 00:26:13.852 ANA State : 1 00:26:13.852 Namespace Identifier : 1 00:26:13.852 00:26:13.852 Commands Supported and Effects 00:26:13.852 ============================== 00:26:13.852 Admin Commands 00:26:13.852 -------------- 00:26:13.852 Get Log Page (02h): Supported 00:26:13.852 Identify (06h): Supported 00:26:13.852 Abort (08h): Supported 00:26:13.852 Set Features (09h): Supported 00:26:13.852 Get Features (0Ah): Supported 00:26:13.852 Asynchronous Event Request (0Ch): Supported 00:26:13.852 Keep Alive (18h): Supported 00:26:13.852 I/O Commands 00:26:13.852 ------------ 00:26:13.852 Flush (00h): Supported 00:26:13.852 Write (01h): Supported LBA-Change 00:26:13.852 Read (02h): Supported 00:26:13.852 Write Zeroes (08h): Supported LBA-Change 00:26:13.852 Dataset Management (09h): Supported 00:26:13.852 00:26:13.852 Error Log 00:26:13.852 ========= 00:26:13.852 Entry: 0 00:26:13.852 Error Count: 0x3 00:26:13.852 Submission Queue Id: 0x0 00:26:13.852 Command Id: 0x5 00:26:13.852 Phase Bit: 0 00:26:13.852 Status Code: 0x2 00:26:13.852 Status Code Type: 0x0 00:26:13.852 Do Not Retry: 1 00:26:13.852 
Error Location: 0x28 00:26:13.852 LBA: 0x0 00:26:13.852 Namespace: 0x0 00:26:13.852 Vendor Log Page: 0x0 00:26:13.852 ----------- 00:26:13.852 Entry: 1 00:26:13.852 Error Count: 0x2 00:26:13.852 Submission Queue Id: 0x0 00:26:13.852 Command Id: 0x5 00:26:13.852 Phase Bit: 0 00:26:13.852 Status Code: 0x2 00:26:13.852 Status Code Type: 0x0 00:26:13.852 Do Not Retry: 1 00:26:13.852 Error Location: 0x28 00:26:13.852 LBA: 0x0 00:26:13.852 Namespace: 0x0 00:26:13.852 Vendor Log Page: 0x0 00:26:13.852 ----------- 00:26:13.852 Entry: 2 00:26:13.852 Error Count: 0x1 00:26:13.852 Submission Queue Id: 0x0 00:26:13.852 Command Id: 0x4 00:26:13.852 Phase Bit: 0 00:26:13.852 Status Code: 0x2 00:26:13.852 Status Code Type: 0x0 00:26:13.852 Do Not Retry: 1 00:26:13.852 Error Location: 0x28 00:26:13.852 LBA: 0x0 00:26:13.852 Namespace: 0x0 00:26:13.852 Vendor Log Page: 0x0 00:26:13.852 00:26:13.852 Number of Queues 00:26:13.852 ================ 00:26:13.852 Number of I/O Submission Queues: 128 00:26:13.852 Number of I/O Completion Queues: 128 00:26:13.852 00:26:13.852 ZNS Specific Controller Data 00:26:13.852 ============================ 00:26:13.852 Zone Append Size Limit: 0 00:26:13.852 00:26:13.852 00:26:13.852 Active Namespaces 00:26:13.852 ================= 00:26:13.852 get_feature(0x05) failed 00:26:13.852 Namespace ID:1 00:26:13.852 Command Set Identifier: NVM (00h) 00:26:13.852 Deallocate: Supported 00:26:13.852 Deallocated/Unwritten Error: Not Supported 00:26:13.852 Deallocated Read Value: Unknown 00:26:13.852 Deallocate in Write Zeroes: Not Supported 00:26:13.852 Deallocated Guard Field: 0xFFFF 00:26:13.852 Flush: Supported 00:26:13.852 Reservation: Not Supported 00:26:13.852 Namespace Sharing Capabilities: Multiple Controllers 00:26:13.852 Size (in LBAs): 3125627568 (1490GiB) 00:26:13.852 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:13.852 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:13.852 UUID: 48786ebe-d559-46c2-aa40-5033304b1b89 00:26:13.852 Thin Provisioning: Not Supported 00:26:13.852 Per-NS Atomic Units: Yes 00:26:13.852 Atomic Boundary Size (Normal): 0 00:26:13.852 Atomic Boundary Size (PFail): 0 00:26:13.852 Atomic Boundary Offset: 0 00:26:13.852 NGUID/EUI64 Never Reused: No 00:26:13.852 ANA group ID: 1 00:26:13.852 Namespace Write Protected: No 00:26:13.852 Number of LBA Formats: 1 00:26:13.852 Current LBA Format: LBA Format #00 00:26:13.852 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:13.852 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:13.852 rmmod nvme_tcp 00:26:13.852 rmmod nvme_fabrics 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:13.852 16:51:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:13.852 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:13.853 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:13.853 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:13.853 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:13.853 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:13.853 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:13.853 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.853 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.853 16:51:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.757 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:15.757 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:15.757 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:15.757 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:26:16.016 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:16.016 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:16.016 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:16.016 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:16.016 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:16.016 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:16.016 16:51:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:19.309 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:19.309 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:20.246 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:20.246 00:26:20.246 real 0m17.175s 00:26:20.246 user 0m4.320s 00:26:20.246 sys 0m8.772s 00:26:20.246 16:51:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:20.246 16:51:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:20.246 ************************************ 00:26:20.246 END TEST nvmf_identify_kernel_target 00:26:20.246 ************************************ 00:26:20.246 16:51:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:20.246 16:51:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:20.246 16:51:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:20.246 16:51:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.507 ************************************ 00:26:20.507 START TEST nvmf_auth_host 00:26:20.507 ************************************ 00:26:20.507 16:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:20.507 * Looking for test storage... 
00:26:20.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:20.507 16:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:20.507 16:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:20.507 16:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:20.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.507 --rc genhtml_branch_coverage=1 00:26:20.507 --rc genhtml_function_coverage=1 00:26:20.507 --rc genhtml_legend=1 00:26:20.507 --rc geninfo_all_blocks=1 00:26:20.507 --rc geninfo_unexecuted_blocks=1 00:26:20.507 00:26:20.507 ' 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:20.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.507 --rc genhtml_branch_coverage=1 00:26:20.507 --rc genhtml_function_coverage=1 00:26:20.507 --rc genhtml_legend=1 00:26:20.507 --rc geninfo_all_blocks=1 00:26:20.507 --rc geninfo_unexecuted_blocks=1 00:26:20.507 00:26:20.507 ' 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:20.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.507 --rc genhtml_branch_coverage=1 00:26:20.507 --rc genhtml_function_coverage=1 00:26:20.507 --rc genhtml_legend=1 00:26:20.507 --rc geninfo_all_blocks=1 00:26:20.507 --rc geninfo_unexecuted_blocks=1 00:26:20.507 00:26:20.507 ' 00:26:20.507 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:20.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.507 --rc genhtml_branch_coverage=1 00:26:20.508 --rc genhtml_function_coverage=1 00:26:20.508 --rc genhtml_legend=1 00:26:20.508 --rc geninfo_all_blocks=1 00:26:20.508 --rc geninfo_unexecuted_blocks=1 00:26:20.508 00:26:20.508 ' 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.508 16:51:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:20.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:20.508 16:51:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.075 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.075 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:27.075 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:27.075 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:27.076 16:51:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:27.076 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:27.076 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.076 
16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:27.076 Found net devices under 0000:86:00.0: cvl_0_0 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:27.076 Found net devices under 0000:86:00.1: cvl_0_1 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.076 16:51:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:27.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:26:27.076 00:26:27.076 --- 10.0.0.2 ping statistics --- 00:26:27.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.076 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms
00:26:27.076
00:26:27.076 --- 10.0.0.1 ping statistics ---
00:26:27.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:27.076 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms
00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0
00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:26:27.076 16:51:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:26:27.076 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:26:27.076 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:26:27.076 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:27.076 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=671036
00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 671036
00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 671036 ']'
00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
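What the nvmf_tcp_init trace above amounts to: the second E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the first port (cvl_0_1) stays in the root namespace as 10.0.0.1, NVMe/TCP traffic on port 4420 is allowed through iptables, connectivity is verified with one ping in each direction, and the SPDK application under test is then launched inside that namespace with nvme_auth debug logging. A condensed sketch of the equivalent manual setup, using the interface names, addresses and binary path from this particular run:

  # flush stale addresses, then split the two ports across namespaces
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # root-namespace side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # namespaced side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &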
00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6c9ed3b637d3d06cc109bc1ff2c807fb 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.rYn 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6c9ed3b637d3d06cc109bc1ff2c807fb 0 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6c9ed3b637d3d06cc109bc1ff2c807fb 0 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6c9ed3b637d3d06cc109bc1ff2c807fb 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.rYn 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.rYn 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rYn 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:27.077 16:51:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3d5df31344c7e14011bce8a6b16786720b6fe9b7aa5550b3a02b6bc8e585a44b 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.eGN 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3d5df31344c7e14011bce8a6b16786720b6fe9b7aa5550b3a02b6bc8e585a44b 3 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3d5df31344c7e14011bce8a6b16786720b6fe9b7aa5550b3a02b6bc8e585a44b 3 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3d5df31344c7e14011bce8a6b16786720b6fe9b7aa5550b3a02b6bc8e585a44b 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.eGN 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.eGN 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.eGN 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=32d21f36c25d618ebeb950eeffcc6654210aec8c64167705 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.bhF 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 32d21f36c25d618ebeb950eeffcc6654210aec8c64167705 0 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 32d21f36c25d618ebeb950eeffcc6654210aec8c64167705 0 
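The gen_dhchap_key calls above read len/2 random bytes with xxd -p and hand the resulting hex string, together with a digest index (null=0, sha256=1, sha384=2, sha512=3), to format_dhchap_key; the final `python -` step that assembles the secret string is collapsed by xtrace. A rough sketch of what that step plausibly produces, assuming the standard DH-HMAC-CHAP secret representation (base64 over the ASCII secret plus a 4-byte CRC-32, wrapped in a DHHC-1:<digest>:...: envelope); the helper below is a hypothetical stand-in, not copied from nvmf/common.sh, and the exact CRC variant is an assumption rather than something visible in the trace:

  # hypothetical reconstruction of format_dhchap_key
  format_dhchap_key() {
      local key=$1 digest=$2
      # assumed encoding: base64( ASCII secret || CRC-32 of the secret, little-endian )
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02d:" % int(sys.argv[2]) + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key" "$digest"
  }

  key=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex characters, as for keys[0] above
  file=$(mktemp -t spdk.key-null.XXX)
  format_dhchap_key "$key" 0 > "$file"
  chmod 0600 "$file"

The DHHC-1:00:... strings that appear later in this log base64-decode back to the ASCII hex secrets generated here, which is consistent with this layout.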
00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=32d21f36c25d618ebeb950eeffcc6654210aec8c64167705 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.bhF 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.bhF 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.bhF 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2f3698aa89f98acd060a435b425921d952a450e8dfbe082b 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.S8F 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2f3698aa89f98acd060a435b425921d952a450e8dfbe082b 2 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2f3698aa89f98acd060a435b425921d952a450e8dfbe082b 2 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2f3698aa89f98acd060a435b425921d952a450e8dfbe082b 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.S8F 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.S8F 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.S8F 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.077 16:51:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=aefe8642cc0caf6a6f4d4449a681900e 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.D4K 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key aefe8642cc0caf6a6f4d4449a681900e 1 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 aefe8642cc0caf6a6f4d4449a681900e 1 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=aefe8642cc0caf6a6f4d4449a681900e 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.D4K 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.D4K 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.D4K 00:26:27.077 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3e7c329d05f3c4d974bae954704c1451 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.0Tp 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3e7c329d05f3c4d974bae954704c1451 1 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3e7c329d05f3c4d974bae954704c1451 1 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=3e7c329d05f3c4d974bae954704c1451 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.0Tp 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.0Tp 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.0Tp 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=5635d9d137197a3896380ac5caa4b1f41f0ebb623e15de2c 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.3Bx 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 5635d9d137197a3896380ac5caa4b1f41f0ebb623e15de2c 2 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 5635d9d137197a3896380ac5caa4b1f41f0ebb623e15de2c 2 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=5635d9d137197a3896380ac5caa4b1f41f0ebb623e15de2c 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:27.078 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.3Bx 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.3Bx 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.3Bx 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:27.337 16:51:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f871d190a2a4c6e7895abf65396b0d71 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Xr3 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f871d190a2a4c6e7895abf65396b0d71 0 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f871d190a2a4c6e7895abf65396b0d71 0 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f871d190a2a4c6e7895abf65396b0d71 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Xr3 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Xr3 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Xr3 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=de1973b6ddc1d50eb277263592b6d1b97c6a503695d3bb3a2a1eebec23049104 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.tCM 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key de1973b6ddc1d50eb277263592b6d1b97c6a503695d3bb3a2a1eebec23049104 3 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 de1973b6ddc1d50eb277263592b6d1b97c6a503695d3bb3a2a1eebec23049104 3 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=de1973b6ddc1d50eb277263592b6d1b97c6a503695d3bb3a2a1eebec23049104 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.tCM 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.tCM 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.tCM 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 671036 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 671036 ']' 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:27.337 16:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rYn 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.eGN ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eGN 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.bhF 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.S8F ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.S8F 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.D4K 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.0Tp ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.0Tp 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.3Bx 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Xr3 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Xr3 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.tCM 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.596 16:51:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:27.596 16:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:30.128 Waiting for block devices as requested 00:26:30.386 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:30.386 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:30.386 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:30.645 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:30.645 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:30.645 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:30.645 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:30.903 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:30.903 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:30.903 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:31.163 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:31.163 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:31.163 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:31.163 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:31.421 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:31.421 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:31.421 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:31.989 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:31.989 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:31.989 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:31.989 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:31.989 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:31.989 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:31.989 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:31.989 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:31.989 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:32.249 No valid GPT data, bailing 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:32.249 16:51:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:32.249 00:26:32.249 Discovery Log Number of Records 2, Generation counter 2 00:26:32.249 =====Discovery Log Entry 0====== 00:26:32.249 trtype: tcp 00:26:32.249 adrfam: ipv4 00:26:32.249 subtype: current discovery subsystem 00:26:32.249 treq: not specified, sq flow control disable supported 00:26:32.249 portid: 1 00:26:32.249 trsvcid: 4420 00:26:32.249 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:32.249 traddr: 10.0.0.1 00:26:32.249 eflags: none 00:26:32.249 sectype: none 00:26:32.249 =====Discovery Log Entry 1====== 00:26:32.249 trtype: tcp 00:26:32.249 adrfam: ipv4 00:26:32.249 subtype: nvme subsystem 00:26:32.249 treq: not specified, sq flow control disable supported 00:26:32.249 portid: 1 00:26:32.249 trsvcid: 4420 00:26:32.249 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:32.249 traddr: 10.0.0.1 00:26:32.249 eflags: none 00:26:32.249 sectype: none 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:32.249 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.250 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.509 nvme0n1 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:32.509 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
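Each connect_authenticate pass above is the host-side half of one digest/DH-group combination: the matching hmac, dhgroup and DHHC-1 secrets have just been written into the kernel target's configuration by nvmet_auth_set_key (the echo 'hmac(sha256)' / echo ffdhe2048 / echo DHHC-1:... steps), bdev_nvme_set_options pins what the SPDK host may negotiate, and bdev_nvme_attach_controller connects with the key, plus a controller key for bidirectional authentication. Spelled out against scripts/rpc.py rather than the test's rpc_cmd wrapper, one pass looks roughly like this, using the key names and files registered earlier in this run:

  # the secrets were registered with the SPDK keyring once, e.g.:
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.rYn
  ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eGN
  # per pass: restrict the negotiable digest/DH group, then attach with DH-HMAC-CHAP
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  ./scripts/rpc.py bdev_nvme_get_controllers          # nvme0 is listed only if auth succeeded
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0  # tear down before the next combination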
00:26:32.510 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.510 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.510 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.510 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.510 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.510 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.510 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.510 16:51:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.510 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.769 nvme0n1 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.769 16:51:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:32.769 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.770 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.770 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.770 nvme0n1 00:26:32.770 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.770 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.770 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.770 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.770 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.770 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.029 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.030 nvme0n1 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.030 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.289 nvme0n1 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.289 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.290 16:51:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.550 nvme0n1 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.550 16:51:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.550 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.809 nvme0n1 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.809 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:33.810 
16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.810 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.068 nvme0n1 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.068 16:51:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.068 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.356 nvme0n1 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.356 16:51:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.356 16:51:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.683 nvme0n1 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:34.683 16:51:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.683 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.942 nvme0n1 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:34.942 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.943 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.943 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.201 nvme0n1 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:35.201 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:35.202 16:51:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.202 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.540 nvme0n1 00:26:35.540 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:35.540 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.540 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.540 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.540 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.540 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.540 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.540 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.540 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.540 16:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.540 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.800 nvme0n1 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.800 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.059 nvme0n1 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.059 16:51:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.059 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.318 nvme0n1 00:26:36.318 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.318 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.318 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.318 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.318 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.318 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:36.577 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.578 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:36.578 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.578 16:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.578 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.837 nvme0n1 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 
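Editor's note: before each of these host-side attaches, the nvmet_auth_set_key helper programs the matching parameters on the target. Its echoes in the trace ('hmac(sha256)', the DH group name, and the DHHC-1 secrets) are consistent with writes to the kernel nvmet host entry's DH-HMAC-CHAP attributes over configfs. A hedged sketch of that step for the sha256/ffdhe6144/keyid=0 case, with the hostnqn and configfs paths assumed and the secret values taken from the trace above:

    hostnqn=nqn.2024-02.io.spdk:host0                     # assumed host NQN
    host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn      # assumed configfs path

    echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"         # digest under test
    echo 'ffdhe6144'    > "$host_cfg/dhchap_dhgroup"      # DH group under test

    # Host secret for keyid=0 (value copied from the trace).
    echo 'DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8:' \
        > "$host_cfg/dhchap_key"

    # The controller secret is only written when non-empty (keyid=4 has none),
    # which matches the [[ -z ... ]] guard the helper runs before its final echo.
    echo 'DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=:' \
        > "$host_cfg/dhchap_ctrl_key"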
00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.837 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.406 nvme0n1 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.406 16:51:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.406 16:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.665 nvme0n1 00:26:37.665 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.665 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.665 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.665 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.665 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.665 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.923 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 nvme0n1 00:26:38.182 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.182 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.182 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.183 16:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.751 nvme0n1 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.751 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.752 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:39.319 nvme0n1 00:26:39.319 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.319 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.319 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.320 16:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.887 nvme0n1 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:39.887 
16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.887 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.145 16:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.713 nvme0n1 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.713 
16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.713 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.282 nvme0n1 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.282 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:41.283 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:41.283 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:41.283 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.283 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.283 16:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.850 nvme0n1 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:41.850 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:41.851 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.851 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.851 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:41.851 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.851 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:41.851 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:41.851 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:41.851 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:41.851 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.851 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.110 nvme0n1 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.110 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.370 nvme0n1 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:42.370 16:51:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.370 16:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.630 nvme0n1 00:26:42.630 16:51:47 
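
The traces above and below all come from the same nested loop visible at host/auth.sh@100-104: every (digest, dhgroup, keyid) combination is first pushed to the target side via nvmet_auth_set_key and then exercised from the initiator via connect_authenticate (sketched a bit further down). A minimal structural sketch, with the arrays trimmed to only the values that actually appear in this excerpt (the full lists and the DHHC-1 secrets are defined earlier in the script), looks like:

```bash
# Structural sketch of the loop driving these traces (host/auth.sh@100-104).
# Array contents are limited to what is visible in this excerpt; the real
# script defines the full digest/dhgroup/key lists earlier.
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)
# keys[0..4]/ckeys[0..4] hold the DHHC-1 secrets; ckeys[4] is empty, so
# keyid 4 is tested without a bidirectional (controller) key.

for digest in "${digests[@]}"; do              # host/auth.sh@100
	for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101
		for keyid in "${!keys[@]}"; do         # host/auth.sh@102
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side, @103
			connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side, @104
		done
	done
done
```
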
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.630 nvme0n1 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.630 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.890 nvme0n1 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.890 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.150 nvme0n1 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.150 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.409 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.409 
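
Each connect_authenticate iteration in these traces follows the same RPC sequence. The sketch below is reconstructed from the xtrace lines alone (host/auth.sh@55-65) and assumes the surrounding test environment (rpc_cmd, the keys/ckeys arrays, the registered keyN/ckeyN names), so it is an approximation rather than the script's verbatim source:

```bash
# Approximate reconstruction of connect_authenticate() from the xtrace
# output (host/auth.sh@55-65); rpc_cmd is the autotest wrapper around
# scripts/rpc.py, and keyN/ckeyN refer to keys registered earlier.
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3 ckey
	# --dhchap-ctrlr-key is only passed when a controller key exists for
	# this keyid (host/auth.sh@58); keyid 4 has none in this run.
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# Limit the initiator to the digest/dhgroup under test (host/auth.sh@60).
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# get_main_ns_ip (nvmf/common.sh@767-781) resolves NVMF_INITIATOR_IP,
	# i.e. 10.0.0.1, for the tcp transport used in this job (host/auth.sh@61).
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# Confirm the controller actually came up, then tear it down again
	# (host/auth.sh@64-65).
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}
```
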
16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.409 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:43.409 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.409 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.409 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.409 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:43.410 16:51:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.410 nvme0n1 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.410 16:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.410 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.669 nvme0n1 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.669 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:43.928 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.929 nvme0n1 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.929 
16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.929 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.188 nvme0n1 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.188 
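
The repeated ip_candidates blocks in these traces are the get_main_ns_ip helper choosing which address to hand to bdev_nvme_attach_controller. A rough reconstruction from the nvmf/common.sh@767-781 lines follows; the transport variable is already expanded to tcp in the trace, so the TEST_TRANSPORT name is an assumption:

```bash
# Rough reconstruction of get_main_ns_ip (nvmf/common.sh@767-781): map the
# transport to the environment variable holding the initiator-facing
# address and print its value (10.0.0.1 in this run).
get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # nvmf/common.sh@770
	ip_candidates["tcp"]=NVMF_INITIATOR_IP       # nvmf/common.sh@771

	# TEST_TRANSPORT is assumed; the trace only shows its expanded value (tcp).
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}         # nvmf/common.sh@774

	[[ -z ${!ip} ]] && return 1                  # nvmf/common.sh@776
	echo "${!ip}"                                # nvmf/common.sh@781
}
```
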
16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.188 16:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.447 nvme0n1 00:26:44.447 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.447 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.447 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.447 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.447 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.447 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.706 16:51:49 
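The bdev_nvme_attach_controller call just traced is the host-side core of every iteration: restrict the allowed DH-HMAC-CHAP digest and FFDHE group, attach with the host key (plus a controller key when the pairing is bidirectional), confirm the controller exists, then detach before the next combination. A minimal sketch of that sequence, assuming SPDK's scripts/rpc.py is on PATH and that key1/ckey1 were registered as key names earlier in the run (the trace's rpc_cmd wrapper forwards these same arguments):

  # values below are the ones exercised in this part of the trace (sha384 / ffdhe4096 / keyid 1)
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0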
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.706 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.965 nvme0n1 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.965 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.966 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.225 nvme0n1 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.225 16:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.484 nvme0n1 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.484 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.485 16:51:50 
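The ckey=( ... ) assignment at host/auth.sh@58 relies on bash's ${var:+word} alternate-value expansion: the --dhchap-ctrlr-key argument pair is produced only when a controller key exists for the current keyid, which is why keyid 4 attaches with --dhchap-key alone. A standalone illustration (the DHHC-1 value is a placeholder, not a real secret):

  declare -A ckeys=( [1]='DHHC-1:02:placeholder' [4]='' )
  for keyid in 1 4; do
      # empty value -> empty array; non-empty -> two extra arguments
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[@]:-<unidirectional, no controller key>}"
  done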
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.485 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.743 nvme0n1 00:26:45.743 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.743 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.743 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.743 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.743 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.743 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.003 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.262 nvme0n1 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.262 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.521 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.521 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.521 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.521 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.521 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.521 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.521 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.521 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.522 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.522 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.522 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.522 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.522 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.522 16:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.781 nvme0n1 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.781 16:51:51 
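Each nvmet_auth_set_key call (host/auth.sh@42-51) mirrors the host configuration on the kernel target: the echoed 'hmac(sha384)', the FFDHE group name and the DHHC-1 secrets are what get written for the initiator's host entry, with the controller key written only when one is defined. A rough sketch of that target side, assuming the usual nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and that the host entry already exists; the secrets are placeholders:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host/dhchap_hash"
  echo ffdhe6144      > "$host/dhchap_dhgroup"
  echo 'DHHC-1:01:<host key>'       > "$host/dhchap_key"
  echo 'DHHC-1:01:<controller key>' > "$host/dhchap_ctrl_key"   # skipped when ckey is empty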
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.781 16:51:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.781 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.782 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:46.782 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.782 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:46.782 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:46.782 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:46.782 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.782 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.782 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.350 nvme0n1 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.350 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:47.351 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.351 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:47.351 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:47.351 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:47.351 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.351 16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.351 
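get_main_ns_ip (nvmf/common.sh@767-781) chooses the address to dial purely from the transport: an associative array maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, and the selected variable is then dereferenced, yielding 10.0.0.1 throughout this run. A condensed sketch of that lookup (TEST_TRANSPORT and the address value stand in for the environment set up earlier in the job):

  declare -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
  TEST_TRANSPORT=tcp
  NVMF_INITIATOR_IP=10.0.0.1
  ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to use
  ip=${!ip}                              # indirect expansion -> 10.0.0.1
  echo "$ip"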
16:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.610 nvme0n1 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.610 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.178 nvme0n1 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.178 16:51:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:48.178 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:48.179 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.179 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.179 16:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.745 nvme0n1 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.745 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
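[Editor's note] On the initiator side, each connect_authenticate cycle traced above reduces to three RPCs against the SPDK application: restrict the allowed DH-HMAC-CHAP parameters, attach with a key pair, verify the controller, then detach. A condensed sketch, assuming rpc.py is on PATH and that the named keys (key0/ckey0 and so on) were registered with SPDK earlier in the test, outside this excerpt:

# Condensed from the trace above; rpc.py location and prior key registration
# are assumptions, the commands and flags are the ones shown in the trace.
rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# The test asserts the controller name before tearing it down.
[[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc.py bdev_nvme_detach_controller nvme0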
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.746 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.312 nvme0n1 00:26:49.312 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.312 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.312 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.312 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.312 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.312 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.312 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.312 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.312 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:49.312 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.571 
16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.571 16:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.139 nvme0n1 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.139 16:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.707 nvme0n1 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.707 16:51:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.707 16:51:55 
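[Editor's note] At this point the trace finishes the sha384/ffdhe8192 pass and, in the next records, moves on to sha512 with ffdhe2048; the host/auth.sh@100-104 frames show the loop driving it. A rough outline of that loop as it can be read back from the trace (only the digests, groups and keyids visible in this excerpt are listed; keys[]/ckeys[] and the two helpers are defined earlier in auth.sh):

# Outline of the test loop visible in the host/auth.sh@100-104 stack frames.
digests=(sha384 sha512)         # only the passes visible in this excerpt
dhgroups=(ffdhe2048 ffdhe8192)  # other groups elided here
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do   # keyids 0-4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done
done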
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.707 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.275 nvme0n1 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.275 16:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:51.534 nvme0n1 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.534 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.793 nvme0n1 00:26:51.793 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.793 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.793 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.793 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:51.794 
16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.794 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.053 nvme0n1 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.053 
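[Editor's note] The repeated nvmf/common.sh@767-781 frames above are the get_main_ns_ip helper picking the address passed to bdev_nvme_attach_controller. A reconstruction from the trace is sketched below; the transport variable name and the early-return error handling are assumptions, since the trace only shows the successful tcp path expanding to 10.0.0.1.

# Reconstructed from the nvmf/common.sh@767-781 frames; TEST_TRANSPORT is an
# assumed variable name (it expands to "tcp" in this run).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP for tcp
    [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 in this run
    echo "${!ip}"
}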
16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.053 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.054 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.312 nvme0n1 00:26:52.312 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.312 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.312 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.312 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.312 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.313 nvme0n1 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.313 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
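[Editor's note] Key id 4 in this run has no controller secret ("[[ -z '' ]]" above), so that attach is unidirectional. The host/auth.sh@58 frame shows how the optional argument pair is assembled with a ${var:+...} expansion; a small sketch of that pattern, with the surrounding call paraphrased from the trace (rpc_cmd is the test's RPC wrapper):

# ckeys[keyid] may be empty; the ${var:+...} expansion drops the whole
# --dhchap-ctrlr-key pair in that case, so keyid 4 authenticates with a
# host key only while the other keyids authenticate bidirectionally.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"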
host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.572 16:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.572 nvme0n1 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.572 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.831 
16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:52.831 16:51:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.831 nvme0n1 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.831 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:53.090 16:51:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.090 nvme0n1 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.090 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.350 16:51:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.350 nvme0n1 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.350 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.610 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.610 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.610 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.610 16:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.610 
16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
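The trace above repeats one connect/authenticate cycle per (digest, dhgroup, keyid) combination: the target key is installed, the host side is restricted to a single DH-HMAC-CHAP digest/dhgroup via bdev_nvme_set_options, the controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), the connection is verified, and the controller is detached again. A minimal sketch of that loop, reconstructed only from the rpc_cmd calls visible in this log; rpc_cmd is assumed to be the SPDK test-harness wrapper around scripts/rpc.py and ckeys the harness array of controller keys, and this sketch is not itself part of the original trace:

  digest=sha512
  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in 0 1 2 3 4; do
      # restrict the initiator to one digest/dhgroup pair (as at host/auth.sh@60)
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # attach with the host key for this keyid; add the controller key only if one is defined (host/auth.sh@58 and @61)
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # the attach only succeeds if authentication passed; verify the controller name, then tear down (host/auth.sh@64 and @65)
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done

The remainder of the trace below is the same cycle continuing through the ffdhe4096 and ffdhe6144 groups.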
00:26:53.610 nvme0n1 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.610 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:53.869 16:51:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:53.869 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:53.870 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:53.870 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.870 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.129 nvme0n1 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.129 16:51:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.129 16:51:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.129 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.388 nvme0n1 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.388 16:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.647 nvme0n1 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.647 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.905 nvme0n1 00:26:54.905 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.905 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.905 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.905 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.905 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.905 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.164 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.165 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.424 nvme0n1 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.424 16:51:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.424 16:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.683 nvme0n1 00:26:55.683 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.683 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.683 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.683 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.683 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.683 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.683 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.683 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.683 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.683 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.942 16:52:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.942 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.202 nvme0n1 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.202 16:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.770 nvme0n1 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.770 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.029 nvme0n1 00:26:57.029 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.029 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.029 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.029 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.029 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.029 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.029 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.029 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.029 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.029 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.287 16:52:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.287 16:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.546 nvme0n1 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5ZWQzYjYzN2QzZDA2Y2MxMDliYzFmZjJjODA3ZmJJnRI8: 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: ]] 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2Q1ZGYzMTM0NGM3ZTE0MDExYmNlOGE2YjE2Nzg2NzIwYjZmZTliN2FhNTU1MGIzYTAyYjZiYzhlNTg1YTQ0Yt39gfw=: 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.546 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.113 nvme0n1 00:26:58.113 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.113 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.113 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.113 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.113 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.113 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.372 16:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.940 nvme0n1 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.940 16:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.940 16:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.940 16:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.509 nvme0n1 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTYzNWQ5ZDEzNzE5N2EzODk2MzgwYWM1Y2FhNGIxZjQxZjBlYmI2MjNlMTVkZTJjTV7jeQ==: 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: ]] 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjg3MWQxOTBhMmE0YzZlNzg5NWFiZjY1Mzk2YjBkNzHDIcfm: 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.509 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.509 
16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.076 nvme0n1 00:27:00.076 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.076 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.076 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.076 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.076 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.076 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.076 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.076 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.076 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.076 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGUxOTczYjZkZGMxZDUwZWIyNzcyNjM1OTJiNmQxYjk3YzZhNTAzNjk1ZDNiYjNhMmExZWViZWMyMzA0OTEwNC7CY4Y=: 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.379 16:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.945 nvme0n1 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.945 request: 00:27:00.945 { 00:27:00.945 "name": "nvme0", 00:27:00.945 "trtype": "tcp", 00:27:00.945 "traddr": "10.0.0.1", 00:27:00.945 "adrfam": "ipv4", 00:27:00.945 "trsvcid": "4420", 00:27:00.945 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:00.945 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:00.945 "prchk_reftag": false, 00:27:00.945 "prchk_guard": false, 00:27:00.945 "hdgst": false, 00:27:00.945 "ddgst": false, 00:27:00.945 "allow_unrecognized_csi": false, 00:27:00.945 "method": "bdev_nvme_attach_controller", 00:27:00.945 "req_id": 1 00:27:00.945 } 00:27:00.945 Got JSON-RPC error response 00:27:00.945 response: 00:27:00.945 { 00:27:00.945 "code": -5, 00:27:00.945 "message": "Input/output error" 00:27:00.945 } 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
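The trace above is the first negative check in host/auth.sh: with the target's host entry freshly provisioned for DH-HMAC-CHAP (sha256, ffdhe2048, keyid 1), an attach attempt that supplies no --dhchap-key must be rejected, which the JSON-RPC layer surfaces as code -5 ("Input/output error"), and the failed attempt must leave bdev_nvme_get_controllers empty. A minimal stand-alone sketch of that pattern (not part of the captured run), assuming an SPDK checkout whose scripts/rpc.py can reach the running target and reusing the address and NQNs from this log:

    # Negative check: connecting without DH-HMAC-CHAP credentials has to fail.
    if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected: unauthenticated connect succeeded" >&2
        exit 1
    fi
    # The failed attempt must not leave a stale controller behind.
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]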
00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.945 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.204 request: 00:27:01.204 { 00:27:01.204 "name": "nvme0", 00:27:01.204 "trtype": "tcp", 00:27:01.204 "traddr": "10.0.0.1", 00:27:01.204 "adrfam": "ipv4", 00:27:01.204 "trsvcid": "4420", 00:27:01.204 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:01.204 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:01.204 "prchk_reftag": false, 00:27:01.204 "prchk_guard": false, 00:27:01.204 "hdgst": false, 00:27:01.204 "ddgst": false, 00:27:01.204 "dhchap_key": "key2", 00:27:01.204 "allow_unrecognized_csi": false, 00:27:01.204 "method": "bdev_nvme_attach_controller", 00:27:01.204 "req_id": 1 00:27:01.204 } 00:27:01.204 Got JSON-RPC error response 00:27:01.204 response: 00:27:01.204 { 00:27:01.204 "code": -5, 00:27:01.204 "message": "Input/output error" 00:27:01.204 } 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
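At this point the second negative check has completed: presenting a host key other than the one just provisioned for keyid 1 (key2 instead of key1) is rejected with the same -5 error and again leaves no controller. The entries that follow exercise the bidirectional case, where the correct host key (key1) is paired with a controller key (ckey2) that does not match the ckey1 provisioned on the target, so the host's verification of the controller cannot succeed either. A sketch of both mismatch cases, under the same assumptions as the snippet above and using a hypothetical attach() helper around the RPC seen in this run:

    # Hypothetical wrapper over the attach RPC used in this log; extra arguments
    # carry the DH-HMAC-CHAP key names that host/auth.sh registered earlier.
    attach() {
        ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 "$@"
    }
    # Wrong host key: the target rejects the host.
    if attach --dhchap-key key2; then
        echo "unexpected: attach with a mismatched host key succeeded" >&2; exit 1
    fi
    # Right host key, wrong controller key: the host rejects the controller.
    if attach --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "unexpected: attach with a mismatched controller key succeeded" >&2; exit 1
    fi
    # Neither failed attempt may leave a controller behind.
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]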
00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.204 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.204 request: 00:27:01.204 { 00:27:01.204 "name": "nvme0", 00:27:01.204 "trtype": "tcp", 00:27:01.204 "traddr": "10.0.0.1", 00:27:01.204 "adrfam": "ipv4", 00:27:01.204 "trsvcid": "4420", 00:27:01.204 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:01.204 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:01.204 "prchk_reftag": false, 00:27:01.204 "prchk_guard": false, 00:27:01.205 "hdgst": false, 00:27:01.205 "ddgst": false, 00:27:01.205 "dhchap_key": "key1", 00:27:01.205 "dhchap_ctrlr_key": "ckey2", 00:27:01.205 "allow_unrecognized_csi": false, 00:27:01.205 "method": "bdev_nvme_attach_controller", 00:27:01.205 "req_id": 1 00:27:01.205 } 00:27:01.205 Got JSON-RPC error response 00:27:01.205 response: 00:27:01.205 { 00:27:01.205 "code": -5, 00:27:01.205 "message": "Input/output 
error" 00:27:01.205 } 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.205 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.463 nvme0n1 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:27:01.463 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.464 16:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.464 request: 00:27:01.464 { 00:27:01.464 "name": "nvme0", 00:27:01.464 "dhchap_key": "key1", 00:27:01.464 "dhchap_ctrlr_key": "ckey2", 00:27:01.464 "method": "bdev_nvme_set_keys", 00:27:01.464 "req_id": 1 00:27:01.464 } 00:27:01.464 Got JSON-RPC error response 00:27:01.464 response: 00:27:01.464 { 00:27:01.464 "code": -13, 00:27:01.464 "message": "Permission denied" 00:27:01.464 } 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:01.464 16:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:02.838 16:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.838 16:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:02.838 16:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.838 16:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.838 16:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.838 16:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:02.838 16:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzJkMjFmMzZjMjVkNjE4ZWJlYjk1MGVlZmZjYzY2NTQyMTBhZWM4YzY0MTY3NzA19ItBOQ==: 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: ]] 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MmYzNjk4YWE4OWY5OGFjZDA2MGE0MzViNDI1OTIxZDk1MmE0NTBlOGRmYmUwODJiRl1QEw==: 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.773 nvme0n1 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWVmZTg2NDJjYzBjYWY2YTZmNGQ0NDQ5YTY4MTkwMGXeUwCw: 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: ]] 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2U3YzMyOWQwNWYzYzRkOTc0YmFlOTU0NzA0YzE0NTErl+3u: 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.773 request: 00:27:03.773 { 00:27:03.773 "name": "nvme0", 00:27:03.773 "dhchap_key": "key2", 00:27:03.773 "dhchap_ctrlr_key": "ckey1", 00:27:03.773 "method": "bdev_nvme_set_keys", 00:27:03.773 "req_id": 1 00:27:03.773 } 00:27:03.773 Got JSON-RPC error response 00:27:03.773 response: 00:27:03.773 { 00:27:03.773 "code": -13, 00:27:03.773 "message": "Permission denied" 00:27:03.773 } 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:03.773 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.774 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.032 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.032 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:04.032 16:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:04.967 16:52:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:04.967 rmmod nvme_tcp 00:27:04.967 rmmod nvme_fabrics 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 671036 ']' 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 671036 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 671036 ']' 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 671036 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 671036 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 671036' 00:27:04.967 killing process with pid 671036 00:27:04.967 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 671036 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 671036 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:05.227 16:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:07.763 16:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:10.299 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:10.299 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:10.299 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:10.299 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:10.299 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:10.299 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:10.299 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:10.299 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:10.299 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:10.300 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:10.300 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:10.300 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:10.300 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:10.300 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:10.300 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:10.300 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:11.678 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:11.678 16:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rYn /tmp/spdk.key-null.bhF /tmp/spdk.key-sha256.D4K /tmp/spdk.key-sha384.3Bx /tmp/spdk.key-sha512.tCM /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:11.678 16:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:14.969 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:14.969 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
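For reference, the cleanup traced above tears down the in-kernel nvmet target through configfs in a fixed order before setup.sh (continuing below) reclaims the PCI devices. A minimal sketch of that order, reconstructed from the commands in this trace; the attribute behind the bare "echo 0" is an assumption (typically the namespace enable flag), everything else is copied from the log.

    # Sketch only -- not the harness itself; NQNs and ids match this run.
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"        # drop the host link first
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    if [[ -e "$subsys" ]]; then
        echo 0 > "$subsys/namespaces/1/enable"                  # assumed target of "echo 0"
        rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
        rmdir "$subsys/namespaces/1"
        rmdir /sys/kernel/config/nvmet/ports/1
        rmdir "$subsys"
    fi
    modprobe -r nvmet_tcp nvmet                                 # unload once configfs is empty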
00:27:14.969 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:14.969 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:14.969 00:27:14.969 real 0m54.251s 00:27:14.969 user 0m48.271s 00:27:14.969 sys 0m12.810s 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.969 ************************************ 00:27:14.969 END TEST nvmf_auth_host 00:27:14.969 ************************************ 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.969 ************************************ 00:27:14.969 START TEST nvmf_digest 00:27:14.969 ************************************ 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:14.969 * Looking for test storage... 
00:27:14.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.969 --rc genhtml_branch_coverage=1 00:27:14.969 --rc genhtml_function_coverage=1 00:27:14.969 --rc genhtml_legend=1 00:27:14.969 --rc geninfo_all_blocks=1 00:27:14.969 --rc geninfo_unexecuted_blocks=1 00:27:14.969 00:27:14.969 ' 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.969 --rc genhtml_branch_coverage=1 00:27:14.969 --rc genhtml_function_coverage=1 00:27:14.969 --rc genhtml_legend=1 00:27:14.969 --rc geninfo_all_blocks=1 00:27:14.969 --rc geninfo_unexecuted_blocks=1 00:27:14.969 00:27:14.969 ' 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.969 --rc genhtml_branch_coverage=1 00:27:14.969 --rc genhtml_function_coverage=1 00:27:14.969 --rc genhtml_legend=1 00:27:14.969 --rc geninfo_all_blocks=1 00:27:14.969 --rc geninfo_unexecuted_blocks=1 00:27:14.969 00:27:14.969 ' 00:27:14.969 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.969 --rc genhtml_branch_coverage=1 00:27:14.970 --rc genhtml_function_coverage=1 00:27:14.970 --rc genhtml_legend=1 00:27:14.970 --rc geninfo_all_blocks=1 00:27:14.970 --rc geninfo_unexecuted_blocks=1 00:27:14.970 00:27:14.970 ' 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.970 
16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:14.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:14.970 16:52:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:14.970 16:52:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.541 
16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:21.541 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:21.541 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:21.541 Found net devices under 0000:86:00.0: cvl_0_0 
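The loop traced above maps each supported PCI NIC to its kernel network interface purely through sysfs; the same check runs next for the second port, 0000:86:00.1. A standalone sketch of that mapping (device addresses and the cvl_0_* names are specific to this host):

    for pci in 0000:86:00.0 0000:86:00.1; do
        [[ -d /sys/bus/pci/devices/$pci/net ]] || continue   # skip devices with no netdev
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # sysfs glob, as in nvmf/common.sh
        pci_net_devs=("${pci_net_devs[@]##*/}")              # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done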
00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:21.541 Found net devices under 0000:86:00.1: cvl_0_1 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.541 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:21.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:27:21.542 00:27:21.542 --- 10.0.0.2 ping statistics --- 00:27:21.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.542 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:27:21.542 00:27:21.542 --- 10.0.0.1 ping statistics --- 00:27:21.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.542 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:21.542 ************************************ 00:27:21.542 START TEST nvmf_digest_clean 00:27:21.542 ************************************ 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=684798 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 684798 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 684798 ']' 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.542 [2024-10-14 16:52:25.455885] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:27:21.542 [2024-10-14 16:52:25.455934] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.542 [2024-10-14 16:52:25.527901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.542 [2024-10-14 16:52:25.568935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:21.542 [2024-10-14 16:52:25.568970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.542 [2024-10-14 16:52:25.568977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.542 [2024-10-14 16:52:25.568982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.542 [2024-10-14 16:52:25.568988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
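Above, nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and then blocks until the RPC socket answers; the startup notices and target configuration follow below. A rough sketch of that launch-and-wait step, assuming a simple polling loop (the real waitforlisten helper is more elaborate); the command line itself is taken from the trace.

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll until the app answers on /var/tmp/spdk.sock, bailing out if it died.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || exit 1
        sleep 0.5
    done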
00:27:21.542 [2024-10-14 16:52:25.569508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.542 null0 00:27:21.542 [2024-10-14 16:52:25.727282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.542 [2024-10-14 16:52:25.751460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=684822 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 684822 /var/tmp/bperf.sock 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 684822 ']' 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:21.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:21.542 [2024-10-14 16:52:25.803771] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:27:21.542 [2024-10-14 16:52:25.803813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid684822 ] 00:27:21.542 [2024-10-14 16:52:25.871224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.542 [2024-10-14 16:52:25.912963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:21.542 16:52:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:21.801 16:52:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.801 16:52:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.801 nvme0n1 00:27:22.059 16:52:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:22.059 16:52:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:22.059 Running I/O for 2 seconds... 
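
Condensed, the bperf sequence traced above is: start bdevperf paused on its own RPC socket, finish framework init, attach the target controller with the data digest enabled (--ddgst, so every NVMe/TCP data PDU carries a CRC32C), then kick off the timed workload. The sketch below reuses the exact flags and paths from this run; the socket-existence poll is an assumed shorthand for waitforlisten.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # bdevperf: core mask 0x2, 4 KiB random reads, queue depth 128, 2 second run,
    # -z and --wait-for-rpc so the workload only starts when told to over RPC.
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!
    until [ -S /var/tmp/bperf.sock ]; do sleep 0.2; done
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest on this controller.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Timed run; the JSON result block seen below is what this call prints.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
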
00:27:23.933 25210.00 IOPS, 98.48 MiB/s [2024-10-14T14:52:28.567Z] 25731.50 IOPS, 100.51 MiB/s 00:27:23.933 Latency(us) 00:27:23.933 [2024-10-14T14:52:28.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.933 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:23.933 nvme0n1 : 2.00 25752.07 100.59 0.00 0.00 4965.89 2293.76 11234.74 00:27:23.933 [2024-10-14T14:52:28.567Z] =================================================================================================================== 00:27:23.933 [2024-10-14T14:52:28.567Z] Total : 25752.07 100.59 0.00 0.00 4965.89 2293.76 11234.74 00:27:23.933 { 00:27:23.933 "results": [ 00:27:23.933 { 00:27:23.933 "job": "nvme0n1", 00:27:23.933 "core_mask": "0x2", 00:27:23.933 "workload": "randread", 00:27:23.933 "status": "finished", 00:27:23.933 "queue_depth": 128, 00:27:23.933 "io_size": 4096, 00:27:23.933 "runtime": 2.003373, 00:27:23.933 "iops": 25752.069135403144, 00:27:23.933 "mibps": 100.59402006016853, 00:27:23.933 "io_failed": 0, 00:27:23.933 "io_timeout": 0, 00:27:23.933 "avg_latency_us": 4965.890831235792, 00:27:23.933 "min_latency_us": 2293.76, 00:27:23.933 "max_latency_us": 11234.742857142857 00:27:23.933 } 00:27:23.933 ], 00:27:23.933 "core_count": 1 00:27:23.933 } 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:24.192 | select(.opcode=="crc32c") 00:27:24.192 | "\(.module_name) \(.executed)"' 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 684822 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 684822 ']' 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 684822 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 684822 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 
= sudo ']' 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 684822' 00:27:24.192 killing process with pid 684822 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 684822 00:27:24.192 Received shutdown signal, test time was about 2.000000 seconds 00:27:24.192 00:27:24.192 Latency(us) 00:27:24.192 [2024-10-14T14:52:28.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.192 [2024-10-14T14:52:28.826Z] =================================================================================================================== 00:27:24.192 [2024-10-14T14:52:28.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:24.192 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 684822 00:27:24.450 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:24.450 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:24.450 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:24.450 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:24.450 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:24.450 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:24.451 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:24.451 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=685293 00:27:24.451 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 685293 /var/tmp/bperf.sock 00:27:24.451 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:24.451 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 685293 ']' 00:27:24.451 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:24.451 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.451 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:24.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:24.451 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.451 16:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:24.451 [2024-10-14 16:52:29.024352] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:27:24.451 [2024-10-14 16:52:29.024402] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685293 ] 00:27:24.451 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:24.451 Zero copy mechanism will not be used. 00:27:24.709 [2024-10-14 16:52:29.094728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.709 [2024-10-14 16:52:29.132316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.709 16:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:24.709 16:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:24.709 16:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:24.709 16:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:24.709 16:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:24.968 16:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:24.968 16:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.226 nvme0n1 00:27:25.226 16:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:25.226 16:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:25.485 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:25.485 Zero copy mechanism will not be used. 00:27:25.485 Running I/O for 2 seconds... 
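
Each of these 2-second runs is followed by the same crc32c accounting check, visible above after the first run and below after this one: accel_get_stats is read over the bperf RPC socket, the jq filter keeps only the crc32c operation counters, and the test asserts the executing module matches the expectation (software here, since scan_dsa=false). A sketch of that check, assuming the same socket path and the stats layout implied by the jq filter in the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Pull accel statistics and keep only "<module_name> <executed>" for crc32c.
    read -r acc_module acc_executed < <($SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    # With DSA disabled the digest work must have run in the software module.
    (( acc_executed > 0 )) && [[ $acc_module == software ]] \
        && echo "crc32c executed $acc_executed times in the software module"
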
00:27:27.363 6094.00 IOPS, 761.75 MiB/s [2024-10-14T14:52:31.997Z] 5949.00 IOPS, 743.62 MiB/s 00:27:27.363 Latency(us) 00:27:27.363 [2024-10-14T14:52:31.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.363 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:27.363 nvme0n1 : 2.00 5947.94 743.49 0.00 0.00 2687.42 616.35 5118.05 00:27:27.363 [2024-10-14T14:52:31.997Z] =================================================================================================================== 00:27:27.363 [2024-10-14T14:52:31.997Z] Total : 5947.94 743.49 0.00 0.00 2687.42 616.35 5118.05 00:27:27.363 { 00:27:27.363 "results": [ 00:27:27.363 { 00:27:27.363 "job": "nvme0n1", 00:27:27.363 "core_mask": "0x2", 00:27:27.363 "workload": "randread", 00:27:27.363 "status": "finished", 00:27:27.363 "queue_depth": 16, 00:27:27.363 "io_size": 131072, 00:27:27.363 "runtime": 2.003048, 00:27:27.363 "iops": 5947.935346531885, 00:27:27.363 "mibps": 743.4919183164857, 00:27:27.363 "io_failed": 0, 00:27:27.363 "io_timeout": 0, 00:27:27.363 "avg_latency_us": 2687.420578910765, 00:27:27.363 "min_latency_us": 616.3504761904762, 00:27:27.363 "max_latency_us": 5118.049523809524 00:27:27.363 } 00:27:27.363 ], 00:27:27.363 "core_count": 1 00:27:27.363 } 00:27:27.363 16:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:27.363 16:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:27.363 16:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:27.363 16:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:27.363 | select(.opcode=="crc32c") 00:27:27.363 | "\(.module_name) \(.executed)"' 00:27:27.363 16:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 685293 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 685293 ']' 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 685293 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 685293 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 685293' 00:27:27.622 killing process with pid 685293 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 685293 00:27:27.622 Received shutdown signal, test time was about 2.000000 seconds 00:27:27.622 00:27:27.622 Latency(us) 00:27:27.622 [2024-10-14T14:52:32.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.622 [2024-10-14T14:52:32.256Z] =================================================================================================================== 00:27:27.622 [2024-10-14T14:52:32.256Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:27.622 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 685293 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=685921 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 685921 /var/tmp/bperf.sock 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 685921 ']' 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:27.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:27.881 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:27.881 [2024-10-14 16:52:32.392843] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:27:27.881 [2024-10-14 16:52:32.392888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685921 ] 00:27:27.881 [2024-10-14 16:52:32.460279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.881 [2024-10-14 16:52:32.502277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.139 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:28.139 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:28.139 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:28.139 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:28.139 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:28.398 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:28.398 16:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:28.657 nvme0n1 00:27:28.657 16:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:28.657 16:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:28.657 Running I/O for 2 seconds... 
00:27:30.972 28264.00 IOPS, 110.41 MiB/s [2024-10-14T14:52:35.606Z] 28415.50 IOPS, 111.00 MiB/s 00:27:30.972 Latency(us) 00:27:30.972 [2024-10-14T14:52:35.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.972 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:30.972 nvme0n1 : 2.00 28414.64 110.99 0.00 0.00 4499.05 2215.74 15166.90 00:27:30.972 [2024-10-14T14:52:35.606Z] =================================================================================================================== 00:27:30.972 [2024-10-14T14:52:35.606Z] Total : 28414.64 110.99 0.00 0.00 4499.05 2215.74 15166.90 00:27:30.972 { 00:27:30.972 "results": [ 00:27:30.972 { 00:27:30.972 "job": "nvme0n1", 00:27:30.972 "core_mask": "0x2", 00:27:30.972 "workload": "randwrite", 00:27:30.972 "status": "finished", 00:27:30.972 "queue_depth": 128, 00:27:30.972 "io_size": 4096, 00:27:30.972 "runtime": 2.004565, 00:27:30.972 "iops": 28414.643576037695, 00:27:30.972 "mibps": 110.99470146889725, 00:27:30.972 "io_failed": 0, 00:27:30.972 "io_timeout": 0, 00:27:30.972 "avg_latency_us": 4499.049284673437, 00:27:30.972 "min_latency_us": 2215.7409523809524, 00:27:30.972 "max_latency_us": 15166.902857142857 00:27:30.972 } 00:27:30.972 ], 00:27:30.972 "core_count": 1 00:27:30.972 } 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:30.972 | select(.opcode=="crc32c") 00:27:30.972 | "\(.module_name) \(.executed)"' 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 685921 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 685921 ']' 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 685921 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 685921 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 685921' 00:27:30.972 killing process with pid 685921 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 685921 00:27:30.972 Received shutdown signal, test time was about 2.000000 seconds 00:27:30.972 00:27:30.972 Latency(us) 00:27:30.972 [2024-10-14T14:52:35.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.972 [2024-10-14T14:52:35.606Z] =================================================================================================================== 00:27:30.972 [2024-10-14T14:52:35.606Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:30.972 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 685921 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=686462 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 686462 /var/tmp/bperf.sock 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 686462 ']' 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:31.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:31.231 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.231 [2024-10-14 16:52:35.770267] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:27:31.231 [2024-10-14 16:52:35.770313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686462 ] 00:27:31.231 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:31.231 Zero copy mechanism will not be used. 00:27:31.231 [2024-10-14 16:52:35.837867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.490 [2024-10-14 16:52:35.874594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.490 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:31.490 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:31.490 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:31.490 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:31.490 16:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:31.749 16:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.749 16:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.007 nvme0n1 00:27:32.007 16:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:32.007 16:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:32.007 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:32.007 Zero copy mechanism will not be used. 00:27:32.007 Running I/O for 2 seconds... 
00:27:34.316 6529.00 IOPS, 816.12 MiB/s [2024-10-14T14:52:38.950Z] 7042.00 IOPS, 880.25 MiB/s 00:27:34.316 Latency(us) 00:27:34.316 [2024-10-14T14:52:38.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.316 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:34.316 nvme0n1 : 2.00 7038.78 879.85 0.00 0.00 2269.21 1209.30 4181.82 00:27:34.316 [2024-10-14T14:52:38.951Z] =================================================================================================================== 00:27:34.317 [2024-10-14T14:52:38.951Z] Total : 7038.78 879.85 0.00 0.00 2269.21 1209.30 4181.82 00:27:34.317 { 00:27:34.317 "results": [ 00:27:34.317 { 00:27:34.317 "job": "nvme0n1", 00:27:34.317 "core_mask": "0x2", 00:27:34.317 "workload": "randwrite", 00:27:34.317 "status": "finished", 00:27:34.317 "queue_depth": 16, 00:27:34.317 "io_size": 131072, 00:27:34.317 "runtime": 2.003189, 00:27:34.317 "iops": 7038.776670598731, 00:27:34.317 "mibps": 879.8470838248413, 00:27:34.317 "io_failed": 0, 00:27:34.317 "io_timeout": 0, 00:27:34.317 "avg_latency_us": 2269.205856399865, 00:27:34.317 "min_latency_us": 1209.2952380952381, 00:27:34.317 "max_latency_us": 4181.820952380953 00:27:34.317 } 00:27:34.317 ], 00:27:34.317 "core_count": 1 00:27:34.317 } 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:34.317 | select(.opcode=="crc32c") 00:27:34.317 | "\(.module_name) \(.executed)"' 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 686462 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 686462 ']' 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 686462 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 686462 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 686462' 00:27:34.317 killing process with pid 686462 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 686462 00:27:34.317 Received shutdown signal, test time was about 2.000000 seconds 00:27:34.317 00:27:34.317 Latency(us) 00:27:34.317 [2024-10-14T14:52:38.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.317 [2024-10-14T14:52:38.951Z] =================================================================================================================== 00:27:34.317 [2024-10-14T14:52:38.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:34.317 16:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 686462 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 684798 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 684798 ']' 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 684798 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 684798 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 684798' 00:27:34.576 killing process with pid 684798 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 684798 00:27:34.576 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 684798 00:27:34.835 00:27:34.835 real 0m13.882s 00:27:34.835 user 0m26.495s 00:27:34.835 sys 0m4.649s 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:34.835 ************************************ 00:27:34.835 END TEST nvmf_digest_clean 00:27:34.835 ************************************ 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:34.835 ************************************ 00:27:34.835 START TEST nvmf_digest_error 00:27:34.835 ************************************ 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=687015 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 687015 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:34.835 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 687015 ']' 00:27:34.836 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.836 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:34.836 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.836 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:34.836 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.836 [2024-10-14 16:52:39.402510] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:27:34.836 [2024-10-14 16:52:39.402549] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.095 [2024-10-14 16:52:39.474516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.095 [2024-10-14 16:52:39.514725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.095 [2024-10-14 16:52:39.514761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.095 [2024-10-14 16:52:39.514769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.095 [2024-10-14 16:52:39.514775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.095 [2024-10-14 16:52:39.514780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:35.095 [2024-10-14 16:52:39.515337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.095 [2024-10-14 16:52:39.587780] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.095 null0 00:27:35.095 [2024-10-14 16:52:39.678514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.095 [2024-10-14 16:52:39.702703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=687194 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 687194 /var/tmp/bperf.sock 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 687194 ']' 
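
The digest_error variant starting here differs from digest_clean in one step: before any I/O the crc32c opcode is routed to the accel error module (the accel_assign_opc call above), and once bdevperf is attached the test toggles error injection, so the target computes corrupted digests and the host-side nvme_tcp code reports the data digest failures seen further down. A sketch of that toggle, assuming the target listens on the default /var/tmp/spdk.sock used by rpc_cmd:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Route crc32c through the error-injecting accel module on the target.
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
    # Start with injection disabled, then corrupt the next 256 crc32c operations;
    # the reads that follow complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256
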
00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:35.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.095 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.354 [2024-10-14 16:52:39.753454] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:27:35.354 [2024-10-14 16:52:39.753500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687194 ] 00:27:35.354 [2024-10-14 16:52:39.820626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.354 [2024-10-14 16:52:39.860987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.354 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.354 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:35.354 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:35.354 16:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:35.613 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:35.613 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.613 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.613 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.613 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.613 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.872 nvme0n1 00:27:35.872 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:35.872 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.872 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:35.872 
16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.872 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:35.872 16:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:36.131 Running I/O for 2 seconds... 00:27:36.131 [2024-10-14 16:52:40.584660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.131 [2024-10-14 16:52:40.584695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.131 [2024-10-14 16:52:40.584706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.131 [2024-10-14 16:52:40.598170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.131 [2024-10-14 16:52:40.598197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.131 [2024-10-14 16:52:40.598211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.131 [2024-10-14 16:52:40.606149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.131 [2024-10-14 16:52:40.606171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.131 [2024-10-14 16:52:40.606180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.131 [2024-10-14 16:52:40.617886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.131 [2024-10-14 16:52:40.617908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.131 [2024-10-14 16:52:40.617917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.131 [2024-10-14 16:52:40.629625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.131 [2024-10-14 16:52:40.629646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.131 [2024-10-14 16:52:40.629655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.131 [2024-10-14 16:52:40.639619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.131 [2024-10-14 16:52:40.639641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.131 [2024-10-14 16:52:40.639649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:36.131 [2024-10-14 16:52:40.647636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.131 [2024-10-14 16:52:40.647657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.131 [2024-10-14 16:52:40.647665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.131 [2024-10-14 16:52:40.656763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.656786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.656794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.132 [2024-10-14 16:52:40.666846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.666867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.666875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.132 [2024-10-14 16:52:40.676476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.676498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.676507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.132 [2024-10-14 16:52:40.686584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.686613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.686623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.132 [2024-10-14 16:52:40.695802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.695824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.695832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.132 [2024-10-14 16:52:40.704061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.704084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.704092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.132 [2024-10-14 16:52:40.716474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.716497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.716506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.132 [2024-10-14 16:52:40.728463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.728484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.728493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.132 [2024-10-14 16:52:40.737400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.737422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.737432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.132 [2024-10-14 16:52:40.749591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.749619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.749628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.132 [2024-10-14 16:52:40.761714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.132 [2024-10-14 16:52:40.761737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.132 [2024-10-14 16:52:40.761746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.392 [2024-10-14 16:52:40.771797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.392 [2024-10-14 16:52:40.771820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.392 [2024-10-14 16:52:40.771833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.392 [2024-10-14 16:52:40.780377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.392 [2024-10-14 16:52:40.780399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.392 [2024-10-14 16:52:40.780407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.392 [2024-10-14 16:52:40.790477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.392 [2024-10-14 16:52:40.790498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.392 [2024-10-14 16:52:40.790506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.392 [2024-10-14 16:52:40.801892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.392 [2024-10-14 16:52:40.801912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.392 [2024-10-14 16:52:40.801921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.392 [2024-10-14 16:52:40.810151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.392 [2024-10-14 16:52:40.810172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.392 [2024-10-14 16:52:40.810180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.392 [2024-10-14 16:52:40.821166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.392 [2024-10-14 16:52:40.821187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.392 [2024-10-14 16:52:40.821195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.392 [2024-10-14 16:52:40.832170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.392 [2024-10-14 16:52:40.832193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.392 [2024-10-14 16:52:40.832201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.392 [2024-10-14 16:52:40.839503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.392 [2024-10-14 16:52:40.839524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.392 [2024-10-14 16:52:40.839532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.392 [2024-10-14 16:52:40.851185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.392 [2024-10-14 16:52:40.851209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 
[2024-10-14 16:52:40.851218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.862060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.862085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.862094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.870487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.870508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.870517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.882823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.882844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.882852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.893858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.893882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.893890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.902300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.902321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.902329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.914799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.914821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.914829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.924645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.924667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18415 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.924676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.936801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.936823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.936831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.947525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.947546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.947554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.955728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.955749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.955758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.967235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.967257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.967266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.977108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.977130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.977138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.985608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.985629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.985637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:40.995839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:40.995861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:3969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:40.995869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:41.006907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:41.006929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:41.006937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:41.016899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:41.016919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:41.016927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.393 [2024-10-14 16:52:41.025054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.393 [2024-10-14 16:52:41.025074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.393 [2024-10-14 16:52:41.025082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.036871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.036898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.036910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.045019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.045040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.045048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.057196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.057218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.057226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.066690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.066711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.066719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.075586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.075612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.075620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.085073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.085095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.085103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.093698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.093718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.093726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.103272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.103296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.103304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.113744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.113765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.113773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.121948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.121972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.121980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.133635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 
[2024-10-14 16:52:41.133655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.133663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.141655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.141676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.141684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.153093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.153114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.653 [2024-10-14 16:52:41.153122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.653 [2024-10-14 16:52:41.165627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.653 [2024-10-14 16:52:41.165648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.165657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.176969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.176989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.176997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.184787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.184807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.184815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.196608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.196628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.196636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.205204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.205224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.205233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.217844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.217866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.217875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.226099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.226120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.226129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.237469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.237491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.237499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.249669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.249690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.249698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.261363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.261384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.261393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.272268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.272289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.272297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.654 [2024-10-14 16:52:41.280854] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.654 [2024-10-14 16:52:41.280875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.654 [2024-10-14 16:52:41.280883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.914 [2024-10-14 16:52:41.291303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.914 [2024-10-14 16:52:41.291325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.914 [2024-10-14 16:52:41.291334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.914 [2024-10-14 16:52:41.302782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.914 [2024-10-14 16:52:41.302803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.914 [2024-10-14 16:52:41.302816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.914 [2024-10-14 16:52:41.311239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.914 [2024-10-14 16:52:41.311259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.914 [2024-10-14 16:52:41.311267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.914 [2024-10-14 16:52:41.320665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.914 [2024-10-14 16:52:41.320686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.914 [2024-10-14 16:52:41.320694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.914 [2024-10-14 16:52:41.329898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.914 [2024-10-14 16:52:41.329920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.914 [2024-10-14 16:52:41.329928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.914 [2024-10-14 16:52:41.341159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.914 [2024-10-14 16:52:41.341180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.914 [2024-10-14 16:52:41.341188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:36.914 [2024-10-14 16:52:41.350010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.914 [2024-10-14 16:52:41.350033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.914 [2024-10-14 16:52:41.350041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.914 [2024-10-14 16:52:41.359951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.914 [2024-10-14 16:52:41.359975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.914 [2024-10-14 16:52:41.359984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.914 [2024-10-14 16:52:41.369314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.914 [2024-10-14 16:52:41.369335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.914 [2024-10-14 16:52:41.369343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.914 [2024-10-14 16:52:41.377630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.377651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.377659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.387414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.387435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.387444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.398554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.398575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.398583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.408936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.408957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.408966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.418103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.418125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.418133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.427636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.427656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.427665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.435859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.435879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.435887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.445458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.445478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.445487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.454547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.454568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.454576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.463578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.463599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.463616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.473061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.473082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.473090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.482895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.482916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.482924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.491253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.491273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.491281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.503568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.503588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.503597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.516167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.516189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.516197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.527670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.527691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.527700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.536470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.536491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 [2024-10-14 16:52:41.536499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.915 [2024-10-14 16:52:41.547611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:36.915 [2024-10-14 16:52:41.547632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.915 
[2024-10-14 16:52:41.547640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.557884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.557910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.557918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 24965.00 IOPS, 97.52 MiB/s [2024-10-14T14:52:41.810Z] [2024-10-14 16:52:41.567920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.567941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.567949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.576697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.576718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.576727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.586457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.586478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.586486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.595445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.595465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.595474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.604744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.604765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.604773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.613354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.613376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:15896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.613385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.625543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.625564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.625573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.637968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.637989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.637998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.648489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.648511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.648519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.657161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.657181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.657189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.668683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.668704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.668713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.679686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.679707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.679715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.688099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.688120] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.688128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.698358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.698379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.698387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.709799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.709820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.709828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.718192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.718212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.718220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.727607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.727628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.727640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.737275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.737296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.737304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.746198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.176 [2024-10-14 16:52:41.746218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.176 [2024-10-14 16:52:41.746226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.176 [2024-10-14 16:52:41.754499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 
00:27:37.176 [2024-10-14 16:52:41.754519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.177 [2024-10-14 16:52:41.754528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.177 [2024-10-14 16:52:41.767113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.177 [2024-10-14 16:52:41.767138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.177 [2024-10-14 16:52:41.767146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.177 [2024-10-14 16:52:41.779667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.177 [2024-10-14 16:52:41.779688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.177 [2024-10-14 16:52:41.779697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.177 [2024-10-14 16:52:41.790078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.177 [2024-10-14 16:52:41.790098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.177 [2024-10-14 16:52:41.790106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.177 [2024-10-14 16:52:41.798357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.177 [2024-10-14 16:52:41.798378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.177 [2024-10-14 16:52:41.798386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.177 [2024-10-14 16:52:41.809574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.177 [2024-10-14 16:52:41.809593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.177 [2024-10-14 16:52:41.809607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.821697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.821722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.821731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.830028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.830047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.830056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.842302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.842322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.842330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.853425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.853445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.853453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.861989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.862008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.862016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.874847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.874867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.874876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.887369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.887390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.887398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.895446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.895466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.895474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.907518] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.907538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.907546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.917277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.917297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.917305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.928830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.928850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.928858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.938055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.938075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.938083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.949138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.949157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.949165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.961494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.961515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.961522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.973953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.973972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.973980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:37.437 [2024-10-14 16:52:41.983984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.984003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.437 [2024-10-14 16:52:41.984011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.437 [2024-10-14 16:52:41.991910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.437 [2024-10-14 16:52:41.991930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.438 [2024-10-14 16:52:41.991937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.438 [2024-10-14 16:52:42.003173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.438 [2024-10-14 16:52:42.003193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.438 [2024-10-14 16:52:42.003204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.438 [2024-10-14 16:52:42.013766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.438 [2024-10-14 16:52:42.013785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.438 [2024-10-14 16:52:42.013793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.438 [2024-10-14 16:52:42.025960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.438 [2024-10-14 16:52:42.025980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.438 [2024-10-14 16:52:42.025988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.438 [2024-10-14 16:52:42.034376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.438 [2024-10-14 16:52:42.034396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.438 [2024-10-14 16:52:42.034404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.438 [2024-10-14 16:52:42.044267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.438 [2024-10-14 16:52:42.044287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.438 [2024-10-14 16:52:42.044295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.438 [2024-10-14 16:52:42.054095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.438 [2024-10-14 16:52:42.054115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.438 [2024-10-14 16:52:42.054123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.438 [2024-10-14 16:52:42.063155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.438 [2024-10-14 16:52:42.063175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.438 [2024-10-14 16:52:42.063183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.698 [2024-10-14 16:52:42.072825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.698 [2024-10-14 16:52:42.072847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-10-14 16:52:42.072855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.698 [2024-10-14 16:52:42.082161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.698 [2024-10-14 16:52:42.082181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-10-14 16:52:42.082189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.698 [2024-10-14 16:52:42.090689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.698 [2024-10-14 16:52:42.090710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-10-14 16:52:42.090718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.698 [2024-10-14 16:52:42.100691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.698 [2024-10-14 16:52:42.100711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-10-14 16:52:42.100719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.698 [2024-10-14 16:52:42.110534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.698 [2024-10-14 16:52:42.110554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-10-14 16:52:42.110562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.698 [2024-10-14 16:52:42.119324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.698 [2024-10-14 16:52:42.119344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-10-14 16:52:42.119352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.698 [2024-10-14 16:52:42.129552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.698 [2024-10-14 16:52:42.129574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-10-14 16:52:42.129582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.698 [2024-10-14 16:52:42.141071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.698 [2024-10-14 16:52:42.141092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.698 [2024-10-14 16:52:42.141101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.150809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.150830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.150838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.162864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.162884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.162892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.170969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.170989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.171001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.182975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.182996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.183004] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.195056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.195076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.195084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.205902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.205922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.205930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.214114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.214134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.214142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.224228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.224247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.224255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.235974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.235994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.236002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.245476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.245497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.245505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.255506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.255527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4752 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.255536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.263777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.263805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.263814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.274373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.274393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.274401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.282895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.282915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.282923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.295724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.295746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.295754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.306699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.306719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.306727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.315330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.315350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.315358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.699 [2024-10-14 16:52:42.326801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.699 [2024-10-14 16:52:42.326822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:12467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.699 [2024-10-14 16:52:42.326830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.337491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.337512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.337520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.346118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.346139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.346147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.355661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.355682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.355689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.364596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.364622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.364630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.373664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.373685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.373693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.383642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.383664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.383673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.392615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.392636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.392644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.402512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.402532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.402540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.412394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.412415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.412423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.420388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.420409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.420417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.432062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.432082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.432094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.441415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.441436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.441444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.449912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.449932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.449940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.459468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 
00:27:37.960 [2024-10-14 16:52:42.459489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.459497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.468561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.468581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.468589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.477072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.477092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.477099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.487303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.487323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.487330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.496683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.496703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.496711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.505964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.505985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.505993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.514126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.514150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.514159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.525349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.525368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.960 [2024-10-14 16:52:42.525376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.960 [2024-10-14 16:52:42.537243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.960 [2024-10-14 16:52:42.537263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.961 [2024-10-14 16:52:42.537271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.961 [2024-10-14 16:52:42.549687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.961 [2024-10-14 16:52:42.549707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.961 [2024-10-14 16:52:42.549715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.961 [2024-10-14 16:52:42.558121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.961 [2024-10-14 16:52:42.558141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.961 [2024-10-14 16:52:42.558148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.961 25056.00 IOPS, 97.88 MiB/s [2024-10-14T14:52:42.595Z] [2024-10-14 16:52:42.570983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee6ab0) 00:27:37.961 [2024-10-14 16:52:42.571000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.961 [2024-10-14 16:52:42.571008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.961 00:27:37.961 Latency(us) 00:27:37.961 [2024-10-14T14:52:42.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.961 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:37.961 nvme0n1 : 2.01 25086.72 98.00 0.00 0.00 5096.92 2512.21 16727.28 00:27:37.961 [2024-10-14T14:52:42.595Z] =================================================================================================================== 00:27:37.961 [2024-10-14T14:52:42.595Z] Total : 25086.72 98.00 0.00 0.00 5096.92 2512.21 16727.28 00:27:37.961 { 00:27:37.961 "results": [ 00:27:37.961 { 00:27:37.961 "job": "nvme0n1", 00:27:37.961 "core_mask": "0x2", 00:27:37.961 "workload": "randread", 00:27:37.961 "status": "finished", 00:27:37.961 "queue_depth": 128, 00:27:37.961 "io_size": 4096, 00:27:37.961 "runtime": 2.006679, 00:27:37.961 "iops": 25086.722888912478, 00:27:37.961 "mibps": 97.99501128481437, 00:27:37.961 "io_failed": 0, 00:27:37.961 "io_timeout": 0, 00:27:37.961 
"avg_latency_us": 5096.921877065083, 00:27:37.961 "min_latency_us": 2512.213333333333, 00:27:37.961 "max_latency_us": 16727.28380952381 00:27:37.961 } 00:27:37.961 ], 00:27:37.961 "core_count": 1 00:27:37.961 } 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:38.221 | .driver_specific 00:27:38.221 | .nvme_error 00:27:38.221 | .status_code 00:27:38.221 | .command_transient_transport_error' 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 197 > 0 )) 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 687194 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 687194 ']' 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 687194 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 687194 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 687194' 00:27:38.221 killing process with pid 687194 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 687194 00:27:38.221 Received shutdown signal, test time was about 2.000000 seconds 00:27:38.221 00:27:38.221 Latency(us) 00:27:38.221 [2024-10-14T14:52:42.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.221 [2024-10-14T14:52:42.855Z] =================================================================================================================== 00:27:38.221 [2024-10-14T14:52:42.855Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:38.221 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 687194 00:27:38.480 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:38.480 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:38.480 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:38.480 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:38.480 16:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:38.480 16:52:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=687674 00:27:38.480 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 687674 /var/tmp/bperf.sock 00:27:38.480 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:38.480 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 687674 ']' 00:27:38.480 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:38.480 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:38.480 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:38.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:38.480 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:38.480 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:38.480 [2024-10-14 16:52:43.047413] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:27:38.480 [2024-10-14 16:52:43.047459] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687674 ] 00:27:38.480 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:38.480 Zero copy mechanism will not be used. 
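Editor's note, a minimal sketch only: the digest.sh trace above reads the per-bdev NVMe error counters over the bperf RPC socket and checks that the transient-transport-error count is non-zero (the "(( 197 > 0 ))" test a few lines up). The sketch below mirrors that flow; SPDK_DIR and BPERF_SOCK are stand-ins taken from this run's workspace layout, not new configuration.

  # Sketch of the counter check, assuming the paths used in this log.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: workspace path from this run
  BPERF_SOCK=/var/tmp/bperf.sock                               # bdevperf RPC socket, as traced above

  get_transient_errcount() {
      local bdev=$1
      # bdev_get_iostat exposes per-bdev NVMe error statistics when the
      # controller was set up with --nvme-error-stat (see the trace below).
      "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }

  count=$(get_transient_errcount nvme0n1)
  # The test only passes if at least one TRANSIENT TRANSPORT ERROR completion
  # was recorded while the digest corruption was active.
  (( count > 0 )) && echo "observed $count transient transport errors"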
00:27:38.740 [2024-10-14 16:52:43.116168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.740 [2024-10-14 16:52:43.157782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.740 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:38.740 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:38.740 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:38.740 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:39.000 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:39.000 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.000 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:39.000 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.000 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:39.000 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:39.260 nvme0n1 00:27:39.260 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:39.260 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.260 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:39.260 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.260 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:39.260 16:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:39.260 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:39.260 Zero copy mechanism will not be used. 00:27:39.260 Running I/O for 2 seconds... 
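Editor's note, a hedged sketch of the setup just traced: per-controller NVMe error stats with unlimited bdev retries, a TCP controller attached with data digest enabled, crc32c corruption armed through the accel error-injection RPC, then the queued 2-second bdevperf run. All commands below are the ones shown in the trace; only RPC and BPERF_SOCK are placeholder variables for the literal paths in this log.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock

  # Keep per-controller NVMe error counters and retry failed I/O indefinitely
  # at the bdev layer, so the run completes (io_failed stays 0) even though
  # every read hits an injected digest error.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the initiator with data digest (--ddgst): received payloads are
  # CRC32C-verified; header digest is not enabled in this test.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm the accel error injection: corrupt the next 32 crc32c operations.
  # In this run the call goes to the default RPC socket of the app under test.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

  # Start the queued bdevperf job; the digest-error records that follow are
  # emitted while this 2-second random-read workload runs.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s "$BPERF_SOCK" perform_tests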
00:27:39.260 [2024-10-14 16:52:43.806995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.807030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.807040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.813639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.813663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.813676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.820659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.820681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.820690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.826314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.826339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.826350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.832642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.832663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.832672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.838758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.838779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.838787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.844968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.844989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.844997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.851629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.851649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.851658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.857136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.857156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.857164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.862954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.862975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.862983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.868194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.868220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.868228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.874109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.874131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.874139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.879693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.879714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.879722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.885137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.885159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.885167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.260 [2024-10-14 16:52:43.890720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.260 [2024-10-14 16:52:43.890741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.260 [2024-10-14 16:52:43.890750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.896405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.896427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.896435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.902200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.902222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.902230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.907263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.907285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.907293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.912127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.912149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.912157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.917833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.917854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.917862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.923040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.923062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 
[2024-10-14 16:52:43.923070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.928487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.928507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.928516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.933954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.933975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.933983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.939365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.939386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.939394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.945035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.945056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.945064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.950810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.950830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.950838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.956182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.956203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.956211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.961742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.961768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3392 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.961780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.967921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.967942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.967950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.973872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.973892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.973900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.977324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.977343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.977351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.984299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.984319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.984327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.991003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.991023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.991031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:43.998344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:43.998364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:43.998372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:44.006139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:44.006159] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:44.006167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:44.012033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:44.012053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:44.012062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:44.018542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:44.018570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:44.018578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:44.025287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:44.025309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:44.025317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:44.031503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:44.031525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:44.031533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:44.039113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:44.039135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:44.039143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:44.047054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.521 [2024-10-14 16:52:44.047075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.521 [2024-10-14 16:52:44.047083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.521 [2024-10-14 16:52:44.054789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 
16:52:44.054810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.054819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.062271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.062293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.062302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.069744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.069768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.069777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.076025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.076047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.076059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.083844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.083866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.083874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.090454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.090476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.090485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.096718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.096740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.096748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.103691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.103712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.103720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.110853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.110875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.110883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.117236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.117258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.117266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.124763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.124784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.124794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.132210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.132231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.132240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.140234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.140260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.140269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.522 [2024-10-14 16:52:44.148444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.522 [2024-10-14 16:52:44.148466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.522 [2024-10-14 16:52:44.148475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.782 [2024-10-14 16:52:44.156387] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.782 [2024-10-14 16:52:44.156409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.782 [2024-10-14 16:52:44.156418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.782 [2024-10-14 16:52:44.164718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.164741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.164749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.172087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.172110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.172119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.179555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.179577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.179585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.187513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.187536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.187544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.194267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.194289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.194297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.201700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.201722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.201730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:39.783 [2024-10-14 16:52:44.208412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.208434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.208442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.214614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.214635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.214644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.221374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.221395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.221403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.228157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.228178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.228187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.235487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.235508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.235517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.242420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.242440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.242449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.248772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.248793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.248802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.256159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.256180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.256189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.262970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.262991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.263003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.268540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.268560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.268568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.273162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.273181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.273189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.278679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.278699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.278708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.284981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.285001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.285009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.292096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.292117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.292125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.299189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.299208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.299217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.306625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.306645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.306653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.313833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.313853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.313862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.321340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.321367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.321376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.328751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.328773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.328781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.336257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.336279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.336287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.342657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.342678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:39.783 [2024-10-14 16:52:44.342686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.349917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.783 [2024-10-14 16:52:44.349939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.783 [2024-10-14 16:52:44.349947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.783 [2024-10-14 16:52:44.356221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.784 [2024-10-14 16:52:44.356242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.784 [2024-10-14 16:52:44.356250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.784 [2024-10-14 16:52:44.362483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.784 [2024-10-14 16:52:44.362504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.784 [2024-10-14 16:52:44.362513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.784 [2024-10-14 16:52:44.368962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.784 [2024-10-14 16:52:44.368984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.784 [2024-10-14 16:52:44.368992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.784 [2024-10-14 16:52:44.375160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.784 [2024-10-14 16:52:44.375181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.784 [2024-10-14 16:52:44.375193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.784 [2024-10-14 16:52:44.381096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.784 [2024-10-14 16:52:44.381117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.784 [2024-10-14 16:52:44.381125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.784 [2024-10-14 16:52:44.387211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.784 [2024-10-14 16:52:44.387232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.784 [2024-10-14 16:52:44.387241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:39.784 [2024-10-14 16:52:44.393542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.784 [2024-10-14 16:52:44.393563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.784 [2024-10-14 16:52:44.393571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:39.784 [2024-10-14 16:52:44.400150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.784 [2024-10-14 16:52:44.400171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.784 [2024-10-14 16:52:44.400179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:39.784 [2024-10-14 16:52:44.407766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.784 [2024-10-14 16:52:44.407787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.784 [2024-10-14 16:52:44.407796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:39.784 [2024-10-14 16:52:44.416130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:39.784 [2024-10-14 16:52:44.416153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.784 [2024-10-14 16:52:44.416161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.423022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.423044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.423053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.430043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.430064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.430072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.436839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.436865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.436874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.443450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.443471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.443479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.449382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.449403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.449411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.456220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.456242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.456250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.463137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.463158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.463166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.470628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.470649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.470657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.477699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.477721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.477729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.484760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 
00:27:40.044 [2024-10-14 16:52:44.484783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.484791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.492044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.492066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.492074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.497749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.497771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.497780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.505549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.505571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.505579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.512833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.512855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.512863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.520429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.520450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.520458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.528213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.528235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.528243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.536534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.536557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.536565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.545111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.545133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.545141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.553193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.553215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.553223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.560478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.560499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.560511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.567597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.567623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.567631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.574849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.574872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.574880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.044 [2024-10-14 16:52:44.578936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.044 [2024-10-14 16:52:44.578956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.044 [2024-10-14 16:52:44.578965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.584820] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.584841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.584849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.591120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.591142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.591151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.597209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.597229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.597237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.603737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.603759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.603767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.610064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.610085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.610094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.616774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.616798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.616807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.623613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.623634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.623642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.630071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.630092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.630101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.636026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.636047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.636055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.643045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.643066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.643075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.649832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.649853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.649862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.656919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.656940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.656949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.663690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.663711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.663719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.670224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.670245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.670254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.045 [2024-10-14 16:52:44.676692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.045 [2024-10-14 16:52:44.676712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.045 [2024-10-14 16:52:44.676720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.683303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.683326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.683335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.689754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.689776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.689784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.695694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.695714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.695722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.702034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.702054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.702062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.708017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.708038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.708046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.714172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.714193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.714201] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.720746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.720767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.720777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.728540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.728562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.728574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.736552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.736574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.736583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.744628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.744649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.744657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.751536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.751558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.751566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.758858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.758880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.758888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.766504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.766526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.766535] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.773998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.774020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.774028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.781530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.781551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.781559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.788683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.788704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.788712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.795952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.795973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.795981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.305 4626.00 IOPS, 578.25 MiB/s [2024-10-14T14:52:44.939Z] [2024-10-14 16:52:44.804702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.804723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.804732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.811753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.811774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.811782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.818957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.818978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.818987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.826684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.826706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.826715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.834872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.834894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.834903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.842104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.842126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.842134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.849289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.849311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.849320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.855317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.855339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.855352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.861827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.861849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.305 [2024-10-14 16:52:44.861858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.305 [2024-10-14 16:52:44.867839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.305 [2024-10-14 16:52:44.867859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.867868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.873685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.306 [2024-10-14 16:52:44.873705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.873713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.879671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.306 [2024-10-14 16:52:44.879691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.879700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.885460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.306 [2024-10-14 16:52:44.885480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.885488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.891188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.306 [2024-10-14 16:52:44.891210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.891218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.896880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.306 [2024-10-14 16:52:44.896900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.896908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.902597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.306 [2024-10-14 16:52:44.902623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.902631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.908410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 
00:27:40.306 [2024-10-14 16:52:44.908577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.908585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.914524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.306 [2024-10-14 16:52:44.914546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.914554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.921214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.306 [2024-10-14 16:52:44.921235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.921243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.929110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.306 [2024-10-14 16:52:44.929131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.929139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.306 [2024-10-14 16:52:44.936495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.306 [2024-10-14 16:52:44.936516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.306 [2024-10-14 16:52:44.936524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:44.944486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:44.944508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:44.944517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:44.952344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:44.952368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:44.952377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:44.959881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:44.959903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:44.959911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:44.966409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:44.966429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:44.966437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:44.973392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:44.973414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:44.973423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:44.979790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:44.979812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:44.979820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:44.985684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:44.985706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:44.985714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:44.991454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:44.991477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:44.991485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:44.996522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:44.996545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:44.996553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:44.999890] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:44.999910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:44.999918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.005592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.005622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.005631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.012015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.012036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.012044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.019244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.019266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.019278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.025952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.025974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.025983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.033526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.033547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.033555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.040976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.040997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.041006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:40.566 [2024-10-14 16:52:45.048889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.048910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.048918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.056260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.056281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.056288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.063589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.063616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.063625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.071361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.071382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.071391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.079334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.079357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.079366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.087351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.087376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.087385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.094640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.094661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.094669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.101867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.101888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.101896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.109160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.109181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.109189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.116640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.116661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.116669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.123173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.123194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.566 [2024-10-14 16:52:45.123202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.566 [2024-10-14 16:52:45.129746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.566 [2024-10-14 16:52:45.129767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.129775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.567 [2024-10-14 16:52:45.136133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.567 [2024-10-14 16:52:45.136154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.136162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.567 [2024-10-14 16:52:45.142130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.567 [2024-10-14 16:52:45.142150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.142158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.567 [2024-10-14 16:52:45.148269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.567 [2024-10-14 16:52:45.148289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.148297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.567 [2024-10-14 16:52:45.155321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.567 [2024-10-14 16:52:45.155342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.155350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.567 [2024-10-14 16:52:45.162787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.567 [2024-10-14 16:52:45.162808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.162817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.567 [2024-10-14 16:52:45.169879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.567 [2024-10-14 16:52:45.169901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.169910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.567 [2024-10-14 16:52:45.177451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.567 [2024-10-14 16:52:45.177472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.177480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.567 [2024-10-14 16:52:45.185011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.567 [2024-10-14 16:52:45.185033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.185041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.567 [2024-10-14 16:52:45.192282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.567 [2024-10-14 16:52:45.192306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.192314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.567 [2024-10-14 16:52:45.199829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.567 [2024-10-14 16:52:45.199851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.567 [2024-10-14 16:52:45.199860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.206903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.206930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 [2024-10-14 16:52:45.206939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.214301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.214324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 [2024-10-14 16:52:45.214332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.221141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.221163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 [2024-10-14 16:52:45.221171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.228329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.228351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 [2024-10-14 16:52:45.228359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.235009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.235030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 [2024-10-14 16:52:45.235038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.242093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.242114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 
[2024-10-14 16:52:45.242123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.248712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.248734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 [2024-10-14 16:52:45.248742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.255267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.255288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 [2024-10-14 16:52:45.255296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.262153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.262174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 [2024-10-14 16:52:45.262183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.267592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.267620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 [2024-10-14 16:52:45.267629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.273860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.827 [2024-10-14 16:52:45.273883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.827 [2024-10-14 16:52:45.273891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.827 [2024-10-14 16:52:45.280912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.280933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.280942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.287680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.287703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.287711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.293930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.293951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.293959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.300647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.300669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.300677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.307108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.307130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.307138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.314193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.314215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.314223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.320949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.320970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.320983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.327643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.327664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.327672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.334446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.334470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.334479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.341590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.341621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.341630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.349646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.349668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.349677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.357783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.357805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.357813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.365488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.365511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.365520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.372978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.373001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.373010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.380149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.380171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.380179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.387262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.387287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.387295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.393700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.393721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.393730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.400355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.400376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.400385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.407534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.407555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.407563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.413625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.413646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.413654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.417484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.417503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.417511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.422939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.422961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.422969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.429675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 
[2024-10-14 16:52:45.429695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.429703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.436785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.436805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.436813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.443458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.828 [2024-10-14 16:52:45.443479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.828 [2024-10-14 16:52:45.443486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.828 [2024-10-14 16:52:45.449964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.829 [2024-10-14 16:52:45.449985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.829 [2024-10-14 16:52:45.449993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.829 [2024-10-14 16:52:45.456819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:40.829 [2024-10-14 16:52:45.456840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.829 [2024-10-14 16:52:45.456848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.464080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.464102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.464111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.470439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.470460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.470469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.477021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.477043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.477052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.482918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.482940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.482949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.490085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.490107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.490115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.496233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.496254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.496266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.503198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.503218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.503226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.510320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.510341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.510349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.517290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.517311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.517319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.523832] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.523853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.523861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.530284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.089 [2024-10-14 16:52:45.530305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.089 [2024-10-14 16:52:45.530313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.089 [2024-10-14 16:52:45.536705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.536726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.536734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.543518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.543539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.543547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.550011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.550033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.550042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.556715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.556740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.556748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.562788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.562809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.562817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:41.090 [2024-10-14 16:52:45.569560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.569581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.569590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.576678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.576699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.576708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.583566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.583586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.583594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.590763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.590787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.590795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.597824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.597845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.597854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.605249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.605270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.605278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.612345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.612366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.612374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.619836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.619858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.619865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.627127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.627149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.627157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.634768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.634790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.634798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.642222] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.642243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.642251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.648819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.648840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.648848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.655557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.655578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.655587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.661951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.661972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.661979] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.668752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.668773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.668781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.675003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.675028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.675036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.680984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.681005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.681013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.687273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.687294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.687302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.693800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.693821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.693830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.700969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.700990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.700998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.707672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.707693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.707702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.714914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.714935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.714943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.090 [2024-10-14 16:52:45.722046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.090 [2024-10-14 16:52:45.722068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.090 [2024-10-14 16:52:45.722076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.728998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.729020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.350 [2024-10-14 16:52:45.729028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.736364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.736385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.350 [2024-10-14 16:52:45.736393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.743552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.743573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.350 [2024-10-14 16:52:45.743581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.750316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.750337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.350 [2024-10-14 16:52:45.750345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.756785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.756805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:41.350 [2024-10-14 16:52:45.756813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.763685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.763719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.350 [2024-10-14 16:52:45.763727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.770343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.770365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.350 [2024-10-14 16:52:45.770374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.778046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.778067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.350 [2024-10-14 16:52:45.778075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.785446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.785468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.350 [2024-10-14 16:52:45.785477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.792321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.792343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.350 [2024-10-14 16:52:45.792358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.350 [2024-10-14 16:52:45.799721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2101460) 00:27:41.350 [2024-10-14 16:52:45.799742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.350 [2024-10-14 16:52:45.799751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.350 4595.50 IOPS, 574.44 MiB/s 00:27:41.350 Latency(us) 00:27:41.350 [2024-10-14T14:52:45.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.350 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:41.350 
nvme0n1 : 2.00 4595.63 574.45 0.00 0.00 3478.91 620.25 8675.72 00:27:41.350 [2024-10-14T14:52:45.984Z] =================================================================================================================== 00:27:41.350 [2024-10-14T14:52:45.984Z] Total : 4595.63 574.45 0.00 0.00 3478.91 620.25 8675.72 00:27:41.350 { 00:27:41.350 "results": [ 00:27:41.350 { 00:27:41.350 "job": "nvme0n1", 00:27:41.350 "core_mask": "0x2", 00:27:41.350 "workload": "randread", 00:27:41.350 "status": "finished", 00:27:41.350 "queue_depth": 16, 00:27:41.350 "io_size": 131072, 00:27:41.350 "runtime": 2.003423, 00:27:41.350 "iops": 4595.6345714309955, 00:27:41.350 "mibps": 574.4543214288744, 00:27:41.350 "io_failed": 0, 00:27:41.350 "io_timeout": 0, 00:27:41.350 "avg_latency_us": 3478.9137788535636, 00:27:41.350 "min_latency_us": 620.2514285714286, 00:27:41.350 "max_latency_us": 8675.718095238095 00:27:41.350 } 00:27:41.350 ], 00:27:41.350 "core_count": 1 00:27:41.350 } 00:27:41.350 16:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:41.350 16:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:41.350 16:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:41.350 | .driver_specific 00:27:41.350 | .nvme_error 00:27:41.350 | .status_code 00:27:41.350 | .command_transient_transport_error' 00:27:41.350 16:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 296 > 0 )) 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 687674 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 687674 ']' 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 687674 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 687674 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 687674' 00:27:41.610 killing process with pid 687674 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 687674 00:27:41.610 Received shutdown signal, test time was about 2.000000 seconds 00:27:41.610 00:27:41.610 Latency(us) 00:27:41.610 [2024-10-14T14:52:46.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.610 [2024-10-14T14:52:46.244Z] =================================================================================================================== 00:27:41.610 
[2024-10-14T14:52:46.244Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 687674 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:41.610 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=688145 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 688145 /var/tmp/bperf.sock 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 688145 ']' 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:41.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:41.870 [2024-10-14 16:52:46.290960] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:27:41.870 [2024-10-14 16:52:46.291008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688145 ] 00:27:41.870 [2024-10-14 16:52:46.361301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.870 [2024-10-14 16:52:46.398079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:41.870 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:42.129 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:42.129 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.129 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.129 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.129 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.129 16:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.698 nvme0n1 00:27:42.698 16:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:42.698 16:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.698 16:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.698 16:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.698 16:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:42.698 16:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:42.698 Running I/O for 2 seconds... 
00:27:42.698 [2024-10-14 16:52:47.243770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ee5c8 00:27:42.698 [2024-10-14 16:52:47.244455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.698 [2024-10-14 16:52:47.244485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:42.698 [2024-10-14 16:52:47.252987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e95a0 00:27:42.698 [2024-10-14 16:52:47.253673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.698 [2024-10-14 16:52:47.253696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:42.698 [2024-10-14 16:52:47.262226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fac10 00:27:42.698 [2024-10-14 16:52:47.263107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.698 [2024-10-14 16:52:47.263128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:42.698 [2024-10-14 16:52:47.271845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f6458 00:27:42.698 [2024-10-14 16:52:47.272822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.698 [2024-10-14 16:52:47.272841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.698 [2024-10-14 16:52:47.281284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166eaab8 00:27:42.698 [2024-10-14 16:52:47.282084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.698 [2024-10-14 16:52:47.282104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.698 [2024-10-14 16:52:47.289922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e6738 00:27:42.698 [2024-10-14 16:52:47.291323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.698 [2024-10-14 16:52:47.291340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:42.698 [2024-10-14 16:52:47.297699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fe720 00:27:42.698 [2024-10-14 16:52:47.298402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.698 [2024-10-14 16:52:47.298424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 
m:0 dnr:0 00:27:42.698 [2024-10-14 16:52:47.308933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166eea00 00:27:42.699 [2024-10-14 16:52:47.310271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.699 [2024-10-14 16:52:47.310289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:42.699 [2024-10-14 16:52:47.317383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e7818 00:27:42.699 [2024-10-14 16:52:47.318253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.699 [2024-10-14 16:52:47.318272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:42.699 [2024-10-14 16:52:47.326588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fe2e8 00:27:42.699 [2024-10-14 16:52:47.327672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.699 [2024-10-14 16:52:47.327691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.337078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fef90 00:27:42.959 [2024-10-14 16:52:47.338637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.338655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.343529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e6fa8 00:27:42.959 [2024-10-14 16:52:47.344255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.344273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.352105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e38d0 00:27:42.959 [2024-10-14 16:52:47.352800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.352818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.361578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ed4e8 00:27:42.959 [2024-10-14 16:52:47.362401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.362420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:001e p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.371712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f46d0 00:27:42.959 [2024-10-14 16:52:47.372579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.372597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.381016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e23b8 00:27:42.959 [2024-10-14 16:52:47.382098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.382117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.390368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f8e88 00:27:42.959 [2024-10-14 16:52:47.391355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.391374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.399556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166dece0 00:27:42.959 [2024-10-14 16:52:47.400766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.400784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.406490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f5378 00:27:42.959 [2024-10-14 16:52:47.407204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.407222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.417968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ebfd0 00:27:42.959 [2024-10-14 16:52:47.419277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.419295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.427514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e23b8 00:27:42.959 [2024-10-14 16:52:47.428886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.428903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.436989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166feb58 00:27:42.959 [2024-10-14 16:52:47.438482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.438500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.443355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f5378 00:27:42.959 [2024-10-14 16:52:47.443958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.443977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.452793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ecc78 00:27:42.959 [2024-10-14 16:52:47.453619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.453639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.461403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ebb98 00:27:42.959 [2024-10-14 16:52:47.462186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.462205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.470943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166efae0 00:27:42.959 [2024-10-14 16:52:47.471849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.471867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.480363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f3e60 00:27:42.959 [2024-10-14 16:52:47.481385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.481403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.489888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fd640 00:27:42.959 [2024-10-14 16:52:47.491018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.491036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.499307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ebb98 00:27:42.959 [2024-10-14 16:52:47.500627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.500651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.507936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f6020 00:27:42.959 [2024-10-14 16:52:47.509345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.509365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.516085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f3e60 00:27:42.959 [2024-10-14 16:52:47.516776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.516795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.525739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ee5c8 00:27:42.959 [2024-10-14 16:52:47.526563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.526581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.535288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f4298 00:27:42.959 [2024-10-14 16:52:47.536195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.536214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.545340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ed0b0 00:27:42.959 [2024-10-14 16:52:47.546310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.546328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.554635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e1b48 00:27:42.959 [2024-10-14 16:52:47.555814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.555833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.561389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f81e0 00:27:42.959 [2024-10-14 16:52:47.562051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.562069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.570874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f96f8 00:27:42.959 [2024-10-14 16:52:47.571645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.571663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.580006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f8618 00:27:42.959 [2024-10-14 16:52:47.580784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.580803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:42.959 [2024-10-14 16:52:47.589452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f6890 00:27:42.959 [2024-10-14 16:52:47.590206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:42.959 [2024-10-14 16:52:47.590224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.598048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e27f0 00:27:43.219 [2024-10-14 16:52:47.598884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.598903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.608145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166eea00 00:27:43.219 [2024-10-14 16:52:47.609055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.609074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.617255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fb8b8 00:27:43.219 [2024-10-14 16:52:47.618183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.618204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.626282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e23b8 00:27:43.219 [2024-10-14 16:52:47.627203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.627220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.635317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e12d8 00:27:43.219 [2024-10-14 16:52:47.636252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.636270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.644306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f7538 00:27:43.219 [2024-10-14 16:52:47.645236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.645254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.653292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fd640 00:27:43.219 [2024-10-14 16:52:47.654213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.654232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.662336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e99d8 00:27:43.219 [2024-10-14 16:52:47.663320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.663339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.671352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f0ff8 00:27:43.219 [2024-10-14 16:52:47.672279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.672297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.680319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166df550 00:27:43.219 [2024-10-14 16:52:47.681247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.681265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.689384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f35f0 00:27:43.219 [2024-10-14 16:52:47.690304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.690322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.698355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fd208 00:27:43.219 [2024-10-14 16:52:47.699284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.699302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.707404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e4de8 00:27:43.219 [2024-10-14 16:52:47.708341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.708359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.716449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166edd58 00:27:43.219 [2024-10-14 16:52:47.717376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.717394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.725478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ddc00 00:27:43.219 [2024-10-14 16:52:47.726407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.726426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.734541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ec408 00:27:43.219 [2024-10-14 16:52:47.735471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.735489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.743558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e6b70 00:27:43.219 [2024-10-14 16:52:47.744486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 
16:52:47.744504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.752517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166feb58 00:27:43.219 [2024-10-14 16:52:47.753480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.219 [2024-10-14 16:52:47.753498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.219 [2024-10-14 16:52:47.761771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fe2e8 00:27:43.220 [2024-10-14 16:52:47.762734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 [2024-10-14 16:52:47.762756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.220 [2024-10-14 16:52:47.770874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ea680 00:27:43.220 [2024-10-14 16:52:47.771832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 [2024-10-14 16:52:47.771850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.220 [2024-10-14 16:52:47.779999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e1710 00:27:43.220 [2024-10-14 16:52:47.780923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 [2024-10-14 16:52:47.780995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:43.220 [2024-10-14 16:52:47.789454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e38d0 00:27:43.220 [2024-10-14 16:52:47.790178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 [2024-10-14 16:52:47.790197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:43.220 [2024-10-14 16:52:47.798657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fcdd0 00:27:43.220 [2024-10-14 16:52:47.799672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 [2024-10-14 16:52:47.799691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:43.220 [2024-10-14 16:52:47.806932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e6738 00:27:43.220 [2024-10-14 16:52:47.808261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 
[2024-10-14 16:52:47.808280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:43.220 [2024-10-14 16:52:47.815287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f46d0 00:27:43.220 [2024-10-14 16:52:47.815885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 [2024-10-14 16:52:47.815904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:43.220 [2024-10-14 16:52:47.824726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166eea00 00:27:43.220 [2024-10-14 16:52:47.825548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 [2024-10-14 16:52:47.825566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.220 [2024-10-14 16:52:47.834180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fb480 00:27:43.220 [2024-10-14 16:52:47.835198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 [2024-10-14 16:52:47.835217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:43.220 [2024-10-14 16:52:47.843691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f1430 00:27:43.220 [2024-10-14 16:52:47.844853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 [2024-10-14 16:52:47.844871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:43.220 [2024-10-14 16:52:47.852324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f6020 00:27:43.220 [2024-10-14 16:52:47.853146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.220 [2024-10-14 16:52:47.853167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:43.478 [2024-10-14 16:52:47.861309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f7538 00:27:43.478 [2024-10-14 16:52:47.862090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.478 [2024-10-14 16:52:47.862109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:43.478 [2024-10-14 16:52:47.870422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e5658 00:27:43.478 [2024-10-14 16:52:47.871212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:43.478 [2024-10-14 16:52:47.871230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:43.478 [2024-10-14 16:52:47.879481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f57b0 00:27:43.478 [2024-10-14 16:52:47.880308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.478 [2024-10-14 16:52:47.880326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:43.478 [2024-10-14 16:52:47.888520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f92c0 00:27:43.478 [2024-10-14 16:52:47.889230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.478 [2024-10-14 16:52:47.889248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:43.478 [2024-10-14 16:52:47.898034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e1b48 00:27:43.478 [2024-10-14 16:52:47.899043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.478 [2024-10-14 16:52:47.899061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:43.478 [2024-10-14 16:52:47.906426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fc998 00:27:43.478 [2024-10-14 16:52:47.907097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.478 [2024-10-14 16:52:47.907115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:43.478 [2024-10-14 16:52:47.915297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e3498 00:27:43.478 [2024-10-14 16:52:47.915975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.478 [2024-10-14 16:52:47.915993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:43.478 [2024-10-14 16:52:47.924298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e73e0 00:27:43.478 [2024-10-14 16:52:47.924899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:47.924917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:47.934504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f6020 00:27:43.479 [2024-10-14 16:52:47.935653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7877 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:47.935671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:47.944014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f3e60 00:27:43.479 [2024-10-14 16:52:47.945196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:47.945214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:47.953450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ed920 00:27:43.479 [2024-10-14 16:52:47.954772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:47.954791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:47.962909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f5378 00:27:43.479 [2024-10-14 16:52:47.964396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:47.964415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:47.969354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166dfdc0 00:27:43.479 [2024-10-14 16:52:47.970015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:47.970034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:47.978527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e5a90 00:27:43.479 [2024-10-14 16:52:47.979212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:47.979232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:47.987592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f7970 00:27:43.479 [2024-10-14 16:52:47.988283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:47.988301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:47.996634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ddc00 00:27:43.479 [2024-10-14 16:52:47.997333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18449 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:47.997352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.005665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166edd58 00:27:43.479 [2024-10-14 16:52:48.006373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.006393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.014908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e3d08 00:27:43.479 [2024-10-14 16:52:48.015605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.015627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.024299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166eb328 00:27:43.479 [2024-10-14 16:52:48.025054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.025074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.033589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f0bc0 00:27:43.479 [2024-10-14 16:52:48.034427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.034446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.042652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f9b30 00:27:43.479 [2024-10-14 16:52:48.043443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.043462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.051711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ff3c8 00:27:43.479 [2024-10-14 16:52:48.052519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.052538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.060782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e8d30 00:27:43.479 [2024-10-14 16:52:48.061580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13453 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.061599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.069795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fdeb0 00:27:43.479 [2024-10-14 16:52:48.070574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.070594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.078896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f3e60 00:27:43.479 [2024-10-14 16:52:48.079691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.079710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.087334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e5658 00:27:43.479 [2024-10-14 16:52:48.088051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.088074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.096763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f31b8 00:27:43.479 [2024-10-14 16:52:48.097594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.097618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:43.479 [2024-10-14 16:52:48.106232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e84c0 00:27:43.479 [2024-10-14 16:52:48.107176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.479 [2024-10-14 16:52:48.107195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.115815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ec408 00:27:43.738 [2024-10-14 16:52:48.116973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.116992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.124295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e5658 00:27:43.738 [2024-10-14 16:52:48.125081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 
nsid:1 lba:23269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.125099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.133226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f35f0 00:27:43.738 [2024-10-14 16:52:48.134037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.134057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.142525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e8d30 00:27:43.738 [2024-10-14 16:52:48.143122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.143140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.151988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f0bc0 00:27:43.738 [2024-10-14 16:52:48.152711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.152730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.160546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166feb58 00:27:43.738 [2024-10-14 16:52:48.161895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.161913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.168340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fe2e8 00:27:43.738 [2024-10-14 16:52:48.168998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.169017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.177515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e3d08 00:27:43.738 [2024-10-14 16:52:48.178166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.178184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.186801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f1ca0 00:27:43.738 [2024-10-14 16:52:48.187447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:19073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.187466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.195779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e3d08 00:27:43.738 [2024-10-14 16:52:48.196474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.196492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.204967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fef90 00:27:43.738 [2024-10-14 16:52:48.205731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.205750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.214235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ebfd0 00:27:43.738 [2024-10-14 16:52:48.214998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.215018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.223436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f0bc0 00:27:43.738 [2024-10-14 16:52:48.224219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.224237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.232504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ff3c8 00:27:43.738 [2024-10-14 16:52:48.233577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.233595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:43.738 28010.00 IOPS, 109.41 MiB/s [2024-10-14T14:52:48.372Z] [2024-10-14 16:52:48.240918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f4f40 00:27:43.738 [2024-10-14 16:52:48.241729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.241748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.250424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f57b0 00:27:43.738 [2024-10-14 
16:52:48.251307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.251325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.260426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f0bc0 00:27:43.738 [2024-10-14 16:52:48.261478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.738 [2024-10-14 16:52:48.261503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.738 [2024-10-14 16:52:48.269667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e4de8 00:27:43.738 [2024-10-14 16:52:48.270739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.270760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.278786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e0ea0 00:27:43.739 [2024-10-14 16:52:48.279856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.279876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.288010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e6b70 00:27:43.739 [2024-10-14 16:52:48.289063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.289082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.297085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ec408 00:27:43.739 [2024-10-14 16:52:48.298110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.298129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.306148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f2948 00:27:43.739 [2024-10-14 16:52:48.307171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.307190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.315234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166df988 
00:27:43.739 [2024-10-14 16:52:48.316258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.316277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.324251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e0a68 00:27:43.739 [2024-10-14 16:52:48.325278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.325300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.333300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f92c0 00:27:43.739 [2024-10-14 16:52:48.334328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.334347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.342416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e8d30 00:27:43.739 [2024-10-14 16:52:48.343445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.343463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.351479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fdeb0 00:27:43.739 [2024-10-14 16:52:48.352508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.352526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.360534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fc128 00:27:43.739 [2024-10-14 16:52:48.361561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.361579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.739 [2024-10-14 16:52:48.369582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f1868 00:27:43.739 [2024-10-14 16:52:48.370646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.739 [2024-10-14 16:52:48.370665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.379907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with 
pdu=0x2000166fe720 00:27:43.999 [2024-10-14 16:52:48.381369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.381387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.386254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e0630 00:27:43.999 [2024-10-14 16:52:48.386894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.386912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.395242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ed4e8 00:27:43.999 [2024-10-14 16:52:48.395996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.396014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.405303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e1f80 00:27:43.999 [2024-10-14 16:52:48.406194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.406213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.414759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e4de8 00:27:43.999 [2024-10-14 16:52:48.415755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.415774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.423904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e6738 00:27:43.999 [2024-10-14 16:52:48.424855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.424874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.432908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f3a28 00:27:43.999 [2024-10-14 16:52:48.433922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.433942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.441988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfcf5c0) with pdu=0x2000166ebfd0 00:27:43.999 [2024-10-14 16:52:48.443008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.443027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.451061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e6fa8 00:27:43.999 [2024-10-14 16:52:48.452075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.452093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.460089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e12d8 00:27:43.999 [2024-10-14 16:52:48.461108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.461127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.469126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f57b0 00:27:43.999 [2024-10-14 16:52:48.470065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.470083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.478166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f4298 00:27:43.999 [2024-10-14 16:52:48.479146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.479166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.487332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fb480 00:27:43.999 [2024-10-14 16:52:48.488392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.488411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.496489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e88f8 00:27:43.999 [2024-10-14 16:52:48.497552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.497571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.505613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fac10 00:27:43.999 [2024-10-14 16:52:48.506627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.506645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.514641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ed0b0 00:27:43.999 [2024-10-14 16:52:48.515687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.515707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.523876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f4f40 00:27:43.999 [2024-10-14 16:52:48.524937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.524959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.532999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fbcf0 00:27:43.999 [2024-10-14 16:52:48.534063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.534082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.542304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f31b8 00:27:43.999 [2024-10-14 16:52:48.543347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.543366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.551331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f4b08 00:27:43.999 [2024-10-14 16:52:48.552354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.552372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.560362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e5220 00:27:43.999 [2024-10-14 16:52:48.561386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.561404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.569340] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f0788 00:27:43.999 [2024-10-14 16:52:48.570363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.570381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.578360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ea248 00:27:43.999 [2024-10-14 16:52:48.579385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.579403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:43.999 [2024-10-14 16:52:48.587355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f6cc8 00:27:43.999 [2024-10-14 16:52:48.588381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:43.999 [2024-10-14 16:52:48.588400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:44.000 [2024-10-14 16:52:48.596349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e3060 00:27:44.000 [2024-10-14 16:52:48.597371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.000 [2024-10-14 16:52:48.597390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:44.000 [2024-10-14 16:52:48.605409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f7100 00:27:44.000 [2024-10-14 16:52:48.606429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.000 [2024-10-14 16:52:48.606447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:44.000 [2024-10-14 16:52:48.614454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e5ec8 00:27:44.000 [2024-10-14 16:52:48.615473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.000 [2024-10-14 16:52:48.615491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:44.000 [2024-10-14 16:52:48.623523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fb8b8 00:27:44.000 [2024-10-14 16:52:48.624535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.000 [2024-10-14 16:52:48.624554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:44.000 
[2024-10-14 16:52:48.632688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166eb760 00:27:44.000 [2024-10-14 16:52:48.633744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.000 [2024-10-14 16:52:48.633764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:44.259 [2024-10-14 16:52:48.641818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166eea00 00:27:44.259 [2024-10-14 16:52:48.642841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.259 [2024-10-14 16:52:48.642862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:44.259 [2024-10-14 16:52:48.650113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166dfdc0 00:27:44.259 [2024-10-14 16:52:48.651407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.259 [2024-10-14 16:52:48.651426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:44.259 [2024-10-14 16:52:48.658453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f3a28 00:27:44.259 [2024-10-14 16:52:48.659117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.259 [2024-10-14 16:52:48.659135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:44.259 [2024-10-14 16:52:48.667464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f5be8 00:27:44.259 [2024-10-14 16:52:48.668064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.259 [2024-10-14 16:52:48.668082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:44.259 [2024-10-14 16:52:48.676872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e4140 00:27:44.259 [2024-10-14 16:52:48.677620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.259 [2024-10-14 16:52:48.677639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:44.259 [2024-10-14 16:52:48.686023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f8e88 00:27:44.259 [2024-10-14 16:52:48.686750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.259 [2024-10-14 16:52:48.686768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:27:44.259 [2024-10-14 16:52:48.695385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e3060 00:27:44.259 [2024-10-14 16:52:48.696252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.259 [2024-10-14 16:52:48.696270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:44.259 [2024-10-14 16:52:48.705679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e6738 00:27:44.259 [2024-10-14 16:52:48.707008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.259 [2024-10-14 16:52:48.707027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:44.259 [2024-10-14 16:52:48.715114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fb480 00:27:44.259 [2024-10-14 16:52:48.716632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.259 [2024-10-14 16:52:48.716651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:44.259 [2024-10-14 16:52:48.723005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fd208 00:27:44.259 [2024-10-14 16:52:48.723651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.259 [2024-10-14 16:52:48.723672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:44.259 [2024-10-14 16:52:48.732594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e4de8 00:27:44.260 [2024-10-14 16:52:48.733385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.733405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.741198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e23b8 00:27:44.260 [2024-10-14 16:52:48.741940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.741959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.751009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f4b08 00:27:44.260 [2024-10-14 16:52:48.751562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.751581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 
sqhd:0064 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.760114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e8088 00:27:44.260 [2024-10-14 16:52:48.760896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.760914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.769376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e0ea0 00:27:44.260 [2024-10-14 16:52:48.770063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.770082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.778126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f4f40 00:27:44.260 [2024-10-14 16:52:48.779115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.779136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.789676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f6890 00:27:44.260 [2024-10-14 16:52:48.791291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.791311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.796191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ed0b0 00:27:44.260 [2024-10-14 16:52:48.796925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.796944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.805521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e6300 00:27:44.260 [2024-10-14 16:52:48.806037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.806056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.814969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fef90 00:27:44.260 [2024-10-14 16:52:48.815597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.815621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.824411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166efae0 00:27:44.260 [2024-10-14 16:52:48.825166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.825185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.832919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fc560 00:27:44.260 [2024-10-14 16:52:48.834334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.834352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.842889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ec408 00:27:44.260 [2024-10-14 16:52:48.844083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.844102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.849630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166dece0 00:27:44.260 [2024-10-14 16:52:48.850318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.850336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.859392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fa7d8 00:27:44.260 [2024-10-14 16:52:48.859911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.859930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.868028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f9b30 00:27:44.260 [2024-10-14 16:52:48.868812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.868829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.877449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fe2e8 00:27:44.260 [2024-10-14 16:52:48.878355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.878377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:44.260 [2024-10-14 16:52:48.886893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fd208 00:27:44.260 [2024-10-14 16:52:48.887965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.260 [2024-10-14 16:52:48.887984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.895915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f5378 00:27:44.521 [2024-10-14 16:52:48.897337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.897356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.905541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166eb760 00:27:44.521 [2024-10-14 16:52:48.906668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.906686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.914966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f9f68 00:27:44.521 [2024-10-14 16:52:48.916215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.916233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.924404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166efae0 00:27:44.521 [2024-10-14 16:52:48.925770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.925788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.931577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f6020 00:27:44.521 [2024-10-14 16:52:48.932465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.932483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.941388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f9f68 00:27:44.521 [2024-10-14 16:52:48.942083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.942102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.949895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e01f8 00:27:44.521 [2024-10-14 16:52:48.951198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.951226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.957683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f4f40 00:27:44.521 [2024-10-14 16:52:48.958265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.958283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.967137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f1868 00:27:44.521 [2024-10-14 16:52:48.967837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.967856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.976578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ee190 00:27:44.521 [2024-10-14 16:52:48.977389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.977407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.986005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ee5c8 00:27:44.521 [2024-10-14 16:52:48.986982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.987001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:48.995205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ec408 00:27:44.521 [2024-10-14 16:52:48.995787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:48.995806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.004616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f1ca0 00:27:44.521 [2024-10-14 16:52:49.005310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.005328] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.013840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fe2e8 00:27:44.521 [2024-10-14 16:52:49.014787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.014805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.022190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ee190 00:27:44.521 [2024-10-14 16:52:49.023782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.023805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.032262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f3a28 00:27:44.521 [2024-10-14 16:52:49.033425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.033446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.040973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e01f8 00:27:44.521 [2024-10-14 16:52:49.042135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.042154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.050498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166ebb98 00:27:44.521 [2024-10-14 16:52:49.051748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.051767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.057674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f57b0 00:27:44.521 [2024-10-14 16:52:49.058419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.058436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.068875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166df988 00:27:44.521 [2024-10-14 16:52:49.070206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.070223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.076043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f8618 00:27:44.521 [2024-10-14 16:52:49.076896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.076914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.085841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166dfdc0 00:27:44.521 [2024-10-14 16:52:49.086516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.086535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.094446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166eff18 00:27:44.521 [2024-10-14 16:52:49.095749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.521 [2024-10-14 16:52:49.095767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:44.521 [2024-10-14 16:52:49.104463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f31b8 00:27:44.521 [2024-10-14 16:52:49.105483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.522 [2024-10-14 16:52:49.105502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:44.522 [2024-10-14 16:52:49.111143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e73e0 00:27:44.522 [2024-10-14 16:52:49.111765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.522 [2024-10-14 16:52:49.111787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:44.522 [2024-10-14 16:52:49.122845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e0630 00:27:44.522 [2024-10-14 16:52:49.124074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.522 [2024-10-14 16:52:49.124093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.522 [2024-10-14 16:52:49.129568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166dece0 00:27:44.522 [2024-10-14 16:52:49.130271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.522 [2024-10-14 
16:52:49.130289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:44.522 [2024-10-14 16:52:49.140760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f46d0 00:27:44.522 [2024-10-14 16:52:49.142047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.522 [2024-10-14 16:52:49.142064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.522 [2024-10-14 16:52:49.147930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f7970 00:27:44.522 [2024-10-14 16:52:49.148741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.522 [2024-10-14 16:52:49.148759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:44.783 [2024-10-14 16:52:49.157833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fbcf0 00:27:44.783 [2024-10-14 16:52:49.158464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.783 [2024-10-14 16:52:49.158482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:44.783 [2024-10-14 16:52:49.167347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f46d0 00:27:44.783 [2024-10-14 16:52:49.168084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.783 [2024-10-14 16:52:49.168102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:44.783 [2024-10-14 16:52:49.175826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166f20d8 00:27:44.783 [2024-10-14 16:52:49.177209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.783 [2024-10-14 16:52:49.177227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:44.783 [2024-10-14 16:52:49.185768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e01f8 00:27:44.783 [2024-10-14 16:52:49.186882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.783 [2024-10-14 16:52:49.186900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:44.783 [2024-10-14 16:52:49.193428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e3d08 00:27:44.783 [2024-10-14 16:52:49.193926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:44.783 [2024-10-14 16:52:49.193945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:44.783 [2024-10-14 16:52:49.202867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166eea00 00:27:44.783 [2024-10-14 16:52:49.203475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.783 [2024-10-14 16:52:49.203494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:44.783 [2024-10-14 16:52:49.212324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fc998 00:27:44.783 [2024-10-14 16:52:49.213094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.783 [2024-10-14 16:52:49.213112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:44.783 [2024-10-14 16:52:49.220853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166fac10 00:27:44.783 [2024-10-14 16:52:49.222257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.783 [2024-10-14 16:52:49.222275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:44.783 [2024-10-14 16:52:49.230802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcf5c0) with pdu=0x2000166e1b48 00:27:44.783 [2024-10-14 16:52:49.231984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.783 [2024-10-14 16:52:49.232002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.783 28118.50 IOPS, 109.84 MiB/s 00:27:44.783 Latency(us) 00:27:44.783 [2024-10-14T14:52:49.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.783 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:44.783 nvme0n1 : 2.00 28107.85 109.80 0.00 0.00 4547.87 2309.36 12358.22 00:27:44.783 [2024-10-14T14:52:49.417Z] =================================================================================================================== 00:27:44.783 [2024-10-14T14:52:49.417Z] Total : 28107.85 109.80 0.00 0.00 4547.87 2309.36 12358.22 00:27:44.783 { 00:27:44.783 "results": [ 00:27:44.783 { 00:27:44.783 "job": "nvme0n1", 00:27:44.783 "core_mask": "0x2", 00:27:44.783 "workload": "randwrite", 00:27:44.783 "status": "finished", 00:27:44.783 "queue_depth": 128, 00:27:44.783 "io_size": 4096, 00:27:44.783 "runtime": 2.00307, 00:27:44.783 "iops": 28107.854443429336, 00:27:44.783 "mibps": 109.79630641964584, 00:27:44.783 "io_failed": 0, 00:27:44.784 "io_timeout": 0, 00:27:44.784 "avg_latency_us": 4547.872904455733, 00:27:44.784 "min_latency_us": 2309.3638095238093, 00:27:44.784 "max_latency_us": 12358.217142857144 00:27:44.784 } 00:27:44.784 ], 00:27:44.784 "core_count": 1 00:27:44.784 } 00:27:44.784 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:44.784 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:44.784 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:44.784 | .driver_specific 00:27:44.784 | .nvme_error 00:27:44.784 | .status_code 00:27:44.784 | .command_transient_transport_error' 00:27:44.784 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 688145 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 688145 ']' 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 688145 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 688145 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 688145' 00:27:45.043 killing process with pid 688145 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 688145 00:27:45.043 Received shutdown signal, test time was about 2.000000 seconds 00:27:45.043 00:27:45.043 Latency(us) 00:27:45.043 [2024-10-14T14:52:49.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.043 [2024-10-14T14:52:49.677Z] =================================================================================================================== 00:27:45.043 [2024-10-14T14:52:49.677Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 688145 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=688833 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 688833 /var/tmp/bperf.sock 00:27:45.043 16:52:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 688833 ']' 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:45.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:45.043 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.302 [2024-10-14 16:52:49.710348] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:27:45.302 [2024-10-14 16:52:49.710392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688833 ] 00:27:45.302 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:45.302 Zero copy mechanism will not be used. 00:27:45.302 [2024-10-14 16:52:49.778431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.302 [2024-10-14 16:52:49.820256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.302 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:45.302 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:45.302 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.302 16:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.561 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:45.561 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.561 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.561 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.561 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.561 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 
-n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.820 nvme0n1 00:27:46.080 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:46.080 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.080 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:46.080 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.080 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:46.080 16:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:46.080 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:46.080 Zero copy mechanism will not be used. 00:27:46.080 Running I/O for 2 seconds... 00:27:46.080 [2024-10-14 16:52:50.566164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.080 [2024-10-14 16:52:50.566416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-10-14 16:52:50.566444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.080 [2024-10-14 16:52:50.570993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.080 [2024-10-14 16:52:50.571234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-10-14 16:52:50.571258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.080 [2024-10-14 16:52:50.575608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.080 [2024-10-14 16:52:50.575852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-10-14 16:52:50.575873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.080 [2024-10-14 16:52:50.580165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.080 [2024-10-14 16:52:50.580394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-10-14 16:52:50.580414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.080 [2024-10-14 16:52:50.584657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.080 [2024-10-14 16:52:50.584902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-10-14 16:52:50.584923] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.080 [2024-10-14 16:52:50.589305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.080 [2024-10-14 16:52:50.589543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.080 [2024-10-14 16:52:50.589563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.080 [2024-10-14 16:52:50.593919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.080 [2024-10-14 16:52:50.594160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.594181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.598470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.598704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.598724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.602847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.603090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.603110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.607258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.607487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.607507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.611659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.611890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.611910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.616038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.616279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.616303] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.620455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.620686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.620706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.624733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.624970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.624990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.629263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.629489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.629509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.633877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.634104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.634124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.639761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.639829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.639847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.647259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.647340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.647359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.654328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.654588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 
[2024-10-14 16:52:50.654617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.660583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.660836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.660856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.666004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.666265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.666285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.671870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.672112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.672133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.677099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.677343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.677363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.682040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.682277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.682297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.687302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.687529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.687549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.692609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.692851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.692872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.697678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.697909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.697929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.702941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.703168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.703188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.708045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.708280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.708300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.081 [2024-10-14 16:52:50.713173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.081 [2024-10-14 16:52:50.713417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.081 [2024-10-14 16:52:50.713438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.341 [2024-10-14 16:52:50.718301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.341 [2024-10-14 16:52:50.718535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-10-14 16:52:50.718556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.341 [2024-10-14 16:52:50.723939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.341 [2024-10-14 16:52:50.724168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-10-14 16:52:50.724188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.341 [2024-10-14 16:52:50.729327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.341 [2024-10-14 16:52:50.729558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-10-14 16:52:50.729578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.341 [2024-10-14 16:52:50.734670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.341 [2024-10-14 16:52:50.734898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-10-14 16:52:50.734918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.341 [2024-10-14 16:52:50.739880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.341 [2024-10-14 16:52:50.740109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-10-14 16:52:50.740129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.341 [2024-10-14 16:52:50.744612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.341 [2024-10-14 16:52:50.744746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.341 [2024-10-14 16:52:50.744764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.341 [2024-10-14 16:52:50.750333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.750574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.750594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.755844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.756073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.756097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.761052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.761280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.761299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.766225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.766461] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.766480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.771498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.771768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.771789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.776750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.776978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.776998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.782204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.782442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.782462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.787353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.787578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.787598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.792091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.792328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.792348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.796936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.797162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.797181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.802031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.802263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.802283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.807180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.807412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.807432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.812137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.812362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.812382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.817183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.817425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.817445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.822133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.822369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.822395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.827701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.827958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.827981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.832751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.832996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.833017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.837798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 
[2024-10-14 16:52:50.838040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.838061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.842778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.843022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.843043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.847787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.848033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.848053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.852666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.852896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.852917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.857223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.857451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.857471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.861902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.862129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.862149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.866535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.866770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.866790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.872712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) 
with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.872873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.872891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.879238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.879464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.879484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.884523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.884758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.884779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.889975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.890198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.342 [2024-10-14 16:52:50.890222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.342 [2024-10-14 16:52:50.895510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.342 [2024-10-14 16:52:50.895742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.895763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.900938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.901164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.901184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.906998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.907226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.907245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.912067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.912295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.912315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.917383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.917614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.917634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.921812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.922040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.922059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.926251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.926476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.926495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.930724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.930953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.930972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.935142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.935368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.935388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.939591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.939858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.939878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.944218] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.944449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.944469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.948861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.949096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.949116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.953927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.954159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.954178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.958996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.959251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.959271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.964313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.964547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.964567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.970209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.970453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.970472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.343 [2024-10-14 16:52:50.975226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.343 [2024-10-14 16:52:50.975456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.343 [2024-10-14 16:52:50.975480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:27:46.603 [2024-10-14 16:52:50.980032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.603 [2024-10-14 16:52:50.980275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-10-14 16:52:50.980296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.603 [2024-10-14 16:52:50.984820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.603 [2024-10-14 16:52:50.985063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-10-14 16:52:50.985084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.603 [2024-10-14 16:52:50.991369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.603 [2024-10-14 16:52:50.991629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-10-14 16:52:50.991650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.603 [2024-10-14 16:52:50.996710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.603 [2024-10-14 16:52:50.996953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-10-14 16:52:50.996973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.603 [2024-10-14 16:52:51.001619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.603 [2024-10-14 16:52:51.001868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-10-14 16:52:51.001888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.603 [2024-10-14 16:52:51.006269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.603 [2024-10-14 16:52:51.006502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-10-14 16:52:51.006523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.603 [2024-10-14 16:52:51.010728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.603 [2024-10-14 16:52:51.010962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.603 [2024-10-14 16:52:51.010982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.603 [2024-10-14 16:52:51.015176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.603 [2024-10-14 16:52:51.015413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.015433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.019973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.020221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.020241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.024718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.024963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.024984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.029513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.029763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.029784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.034339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.034582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.034611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.039092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.039371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.039393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.043663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.043925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.043946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.048274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.048505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.048525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.052884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.053138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.053158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.057393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.057643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.057663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.062204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.062447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.062468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.066992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.067248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.067269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.071746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.072034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.072063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.076436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.076689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.076712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.080932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.081175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.081197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.085352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.085586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.085613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.089654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.089887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.089907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.093979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.094212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.094232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.098324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.098555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.098579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.102634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.102878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.102898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.106928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.107171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 
[2024-10-14 16:52:51.107191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.111166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.111394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.111413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.115388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.115647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.115668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.119675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.119902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.119922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.123910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.124136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.124156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.128171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.128412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.128432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.132413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.132651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.132670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.136617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.136849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.136868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.140813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.141041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.604 [2024-10-14 16:52:51.141061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.604 [2024-10-14 16:52:51.145041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.604 [2024-10-14 16:52:51.145266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.145285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.149250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.149488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.149508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.153468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.153699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.153718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.157691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.157919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.157939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.162056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.162280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.162300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.166980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.167207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.167227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.171260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.171499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.171519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.175494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.175725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.175745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.179733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.179959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.179979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.183972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.184199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.184218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.188189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.188418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.188438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.192477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.192715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.192735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.197032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.197257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.197277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.201989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.202220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.202240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.207043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.207280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.207300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.212039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.212265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.212292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.216787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.217014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.217033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.221407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.221638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.221657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.226044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.226271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.226291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.230626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 
[2024-10-14 16:52:51.230854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.230874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.605 [2024-10-14 16:52:51.234923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.605 [2024-10-14 16:52:51.235166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.605 [2024-10-14 16:52:51.235186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.239303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.239537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.239558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.243921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.244163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.244184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.248494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.248736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.248756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.252808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.253055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.253074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.257070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.257296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.257316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.261359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) 
with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.261596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.261622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.265649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.265877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.265896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.269925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.270148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.270168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.274153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.274378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.274398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.278390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.278625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.278644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.282665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.282893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.282913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.286902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.287131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.287150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.291079] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.291306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.291326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.295397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.295639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.295659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.299637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.299863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.299883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.303851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.304078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.304098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.308187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.308426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.308445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.312796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.313037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.313056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.317090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.317316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.317335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.321381] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.321621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.321643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.325692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.325925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.325952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.329970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.330213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.330235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.866 [2024-10-14 16:52:51.334317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.866 [2024-10-14 16:52:51.334573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.866 [2024-10-14 16:52:51.334593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.338620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.338853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.338873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.342917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.343149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.343169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.347234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.347489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.347509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:46.867 [2024-10-14 16:52:51.351531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.351780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.351800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.355835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.356062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.356081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.360087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.360314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.360333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.364348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.364591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.364617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.368594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.368849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.368869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.372918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.373142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.373162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.377133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.377361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.377381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.381340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.381567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.381588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.385782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.386019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.386038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.390293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.390520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.390539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.394628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.394869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.394888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.398907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.399133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.399153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.403197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.403424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.403444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.407488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.407723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.407743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.411948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.412176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.412196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.416808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.417035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.417054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.422185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.422412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.422431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.427485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.427719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.427739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.432108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.432337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.432356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.436829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.437065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.437084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.441522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.441753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.441775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.446089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.446328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.446348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.450754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.450981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.451000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.455390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.455622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.455641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.460452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.460680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.867 [2024-10-14 16:52:51.460699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.867 [2024-10-14 16:52:51.465704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.867 [2024-10-14 16:52:51.465795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.868 [2024-10-14 16:52:51.465812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.868 [2024-10-14 16:52:51.470730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.868 [2024-10-14 16:52:51.470969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.868 [2024-10-14 16:52:51.470989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.868 [2024-10-14 16:52:51.475422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.868 [2024-10-14 16:52:51.475652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.868 
[2024-10-14 16:52:51.475671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.868 [2024-10-14 16:52:51.480086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.868 [2024-10-14 16:52:51.480311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.868 [2024-10-14 16:52:51.480330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:46.868 [2024-10-14 16:52:51.484854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.868 [2024-10-14 16:52:51.485086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.868 [2024-10-14 16:52:51.485105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:46.868 [2024-10-14 16:52:51.489592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.868 [2024-10-14 16:52:51.489826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.868 [2024-10-14 16:52:51.489846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.868 [2024-10-14 16:52:51.494258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.868 [2024-10-14 16:52:51.494501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.868 [2024-10-14 16:52:51.494521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.868 [2024-10-14 16:52:51.498766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:46.868 [2024-10-14 16:52:51.498833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.868 [2024-10-14 16:52:51.498851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.503584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.503821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.503841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.508800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.509028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.509047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.514159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.514385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.514405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.521315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.521386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.521404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.527956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.528189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.528209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.534300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.534512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.534532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.540256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.540477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.540496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.547481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.547764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.547784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.553947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.554165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.554185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.128 6386.00 IOPS, 798.25 MiB/s [2024-10-14T14:52:51.762Z] [2024-10-14 16:52:51.560542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.560760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.560780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.565097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.565306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.565325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.569478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.569709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.569728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.573867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.574085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.574108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.578380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.578606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.578628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.128 [2024-10-14 16:52:51.582646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.128 [2024-10-14 16:52:51.582862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.128 [2024-10-14 16:52:51.582883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.586837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 
16:52:51.587065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.587086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.591006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.591235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.591256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.595259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.595490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.595510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.599417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.599654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.599674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.603588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.603827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.603847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.607771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.608000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.608020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.611929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.612142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.612162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.616060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with 
pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.616298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.616318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.620184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.620396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.620416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.624293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.624506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.624525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.628427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.628644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.628663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.633057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.633328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.633348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.638966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.639268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.639288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.644355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.644586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.644611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.649125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.649337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.649357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.654015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.654227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.654250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.658259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.658470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.658490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.662434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.662652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.662671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.666578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.666796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.666815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.670765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.670976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.670996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.674898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.675109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.675128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.678966] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.679164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.679184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.683040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.683239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.683258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.687133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.687333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.687353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.691197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.691400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.691419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.695346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.695548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.695567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.699559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.699773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.699793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.129 [2024-10-14 16:52:51.704521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.704728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.704748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:47.129 [2024-10-14 16:52:51.708823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.129 [2024-10-14 16:52:51.709025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.129 [2024-10-14 16:52:51.709044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.130 [2024-10-14 16:52:51.712972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.130 [2024-10-14 16:52:51.713172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.130 [2024-10-14 16:52:51.713192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.130 [2024-10-14 16:52:51.717121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.130 [2024-10-14 16:52:51.717320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.130 [2024-10-14 16:52:51.717339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.130 [2024-10-14 16:52:51.721674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.130 [2024-10-14 16:52:51.721946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.130 [2024-10-14 16:52:51.721965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.130 [2024-10-14 16:52:51.727758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.130 [2024-10-14 16:52:51.727994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.130 [2024-10-14 16:52:51.728014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.130 [2024-10-14 16:52:51.733193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.130 [2024-10-14 16:52:51.733396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.130 [2024-10-14 16:52:51.733416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.130 [2024-10-14 16:52:51.738397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.130 [2024-10-14 16:52:51.738614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.130 [2024-10-14 16:52:51.738639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.130 [2024-10-14 16:52:51.743745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.130 [2024-10-14 16:52:51.743972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.130 [2024-10-14 16:52:51.743992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.130 [2024-10-14 16:52:51.749067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.130 [2024-10-14 16:52:51.749285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.130 [2024-10-14 16:52:51.749305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.130 [2024-10-14 16:52:51.754265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.130 [2024-10-14 16:52:51.754484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.130 [2024-10-14 16:52:51.754504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.130 [2024-10-14 16:52:51.759252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.130 [2024-10-14 16:52:51.759456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.130 [2024-10-14 16:52:51.759476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.764078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.419 [2024-10-14 16:52:51.764283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.419 [2024-10-14 16:52:51.764303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.768537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.419 [2024-10-14 16:52:51.768746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.419 [2024-10-14 16:52:51.768766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.772937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.419 [2024-10-14 16:52:51.773154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.419 [2024-10-14 16:52:51.773177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.777186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.419 [2024-10-14 16:52:51.777404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.419 [2024-10-14 16:52:51.777424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.781920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.419 [2024-10-14 16:52:51.782138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.419 [2024-10-14 16:52:51.782158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.786343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.419 [2024-10-14 16:52:51.786563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.419 [2024-10-14 16:52:51.786583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.790828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.419 [2024-10-14 16:52:51.791043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.419 [2024-10-14 16:52:51.791062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.795306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.419 [2024-10-14 16:52:51.795528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.419 [2024-10-14 16:52:51.795548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.799659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.419 [2024-10-14 16:52:51.799859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.419 [2024-10-14 16:52:51.799879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.804074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.419 [2024-10-14 16:52:51.804276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.419 [2024-10-14 16:52:51.804296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.419 [2024-10-14 16:52:51.808578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.808788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.808808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.813033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.813256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.813277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.817398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.817610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.817630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.821751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.821953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.821972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.826533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.826772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.826794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.832689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.832897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.832919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.837489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.837697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 
[2024-10-14 16:52:51.837718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.842178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.842383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.842404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.846708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.846928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.846947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.851160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.851378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.851398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.855569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.855774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.855794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.859879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.860081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.860101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.865728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.866005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.866025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.872430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.872716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.872736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.879541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.879835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.879856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.886568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.886854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.886874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.893689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.893963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.893983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.901353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.901636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.901657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.908679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.908926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.908950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.916328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.916547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.916568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.923718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.924009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.924029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.930854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.931169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.931189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.937971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.938244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.938264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.945551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.945814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.945834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.952708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.953008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.953028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.959835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.960035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.960053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.964468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.964675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.964696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.969635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.969864] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.969884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.420 [2024-10-14 16:52:51.974301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.420 [2024-10-14 16:52:51.974501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.420 [2024-10-14 16:52:51.974521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:51.978817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:51.979016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:51.979035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:51.983345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:51.983544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:51.983563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:51.987929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:51.988129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:51.988147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:51.993230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:51.993449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:51.993469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:51.998039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:51.998240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:51.998260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:52.002486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:52.002690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:52.002709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:52.007021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:52.007221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:52.007240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:52.011231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:52.011431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:52.011450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:52.015621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:52.015839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:52.015859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:52.021038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:52.021350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:52.021370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:52.027003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:52.027231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:52.027251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.421 [2024-10-14 16:52:52.032130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.421 [2024-10-14 16:52:52.032338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.421 [2024-10-14 16:52:52.032358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.036888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 
16:52:52.037115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.037135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.041938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.042140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.042160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.046669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.046867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.046887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.051497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.051727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.051750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.056417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.056623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.056642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.061182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.061388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.061408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.065848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.066053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.066072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.070155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with 
pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.070362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.070381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.075179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.075414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.075432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.081209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.081512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.081539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.086096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.086318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.086340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.090892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.091115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.091136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.095577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.095798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.095819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.100372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.100592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.100618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.104992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.105211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.105231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.109583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.109792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.109819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.114154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.114357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.114376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.722 [2024-10-14 16:52:52.118946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.722 [2024-10-14 16:52:52.119169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.722 [2024-10-14 16:52:52.119189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.124881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.125138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.125158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.130164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.130379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.130399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.134744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.134951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.134971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.139360] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.139567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.139586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.143795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.144003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.144024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.148302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.148522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.148543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.152932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.153140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.153160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.157497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.157714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.157734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.162097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.162309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.162329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.167315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.167532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.167553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
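Every record pair in this stretch has the same shape: tcp.c flags a CRC32C data-digest failure on the qpair, and the matching WRITE then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the event the digest_error test later counts. A quick way to tally both sides after the fact is sketched below, assuming the console output has been captured to a file; the file name bperf-console.log is purely an illustration and not part of this run.
# hypothetical capture file; both grep patterns are taken verbatim from the records above
grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf-console.log
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log
In this run the two counts should line up, one transient-transport-error completion per injected digest failure.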
00:27:47.723 [2024-10-14 16:52:52.172227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.172428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.172449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.176996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.177202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.177226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.182067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.182270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.182289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.187214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.187418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.187438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.192058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.192259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.192279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.196798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.196998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.197017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.201878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.202078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.202097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.207150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.207357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.207377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.212058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.212261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.212280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.217043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.217246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.217265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.222104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.222333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.222353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.227069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.227273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.227292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.231948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.232151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.232170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.236713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.236918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.236943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.242040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.242243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.242262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.247106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.247308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.247328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.251745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.251949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.251974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.256615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.256821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.723 [2024-10-14 16:52:52.256840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.723 [2024-10-14 16:52:52.261471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.723 [2024-10-14 16:52:52.261679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.261698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.266413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.266623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.266647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.271355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.271559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.271578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.275948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.276174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.276193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.281015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.281218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.281238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.286145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.286346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.286365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.290867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.291069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.291088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.295704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.295924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.295944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.300457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.300668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.300693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.304826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.305026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 
[2024-10-14 16:52:52.305050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.309446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.309653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.309672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.314339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.314546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.314567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.319098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.319306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.319326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.323511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.323724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.323744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:47.724 [2024-10-14 16:52:52.328138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:47.724 [2024-10-14 16:52:52.328392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.724 [2024-10-14 16:52:52.328411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.045 [2024-10-14 16:52:52.333054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.045 [2024-10-14 16:52:52.333288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.045 [2024-10-14 16:52:52.333312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.045 [2024-10-14 16:52:52.338006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.045 [2024-10-14 16:52:52.338214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:48.045 [2024-10-14 16:52:52.338236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.045 [2024-10-14 16:52:52.342545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.045 [2024-10-14 16:52:52.342763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.045 [2024-10-14 16:52:52.342784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.045 [2024-10-14 16:52:52.347032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.045 [2024-10-14 16:52:52.347253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.045 [2024-10-14 16:52:52.347274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.045 [2024-10-14 16:52:52.351190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.045 [2024-10-14 16:52:52.351411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.045 [2024-10-14 16:52:52.351431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.045 [2024-10-14 16:52:52.355595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.045 [2024-10-14 16:52:52.355822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.045 [2024-10-14 16:52:52.355842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.045 [2024-10-14 16:52:52.360025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.045 [2024-10-14 16:52:52.360233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.045 [2024-10-14 16:52:52.360254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.364339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.364558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.364578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.368712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.368914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.368934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.372867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.373094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.373113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.377070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.377281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.377300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.381284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.381491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.381516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.385449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.385662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.385682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.389592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.389803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.389823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.393641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.393845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.393865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.397699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.397901] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.397927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.402072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.402276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.402296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.406214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.406423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.406443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.410427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.410636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.410660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.415368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.415573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.415594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.420394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.420599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.420631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.424997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.425199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.425219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.429421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.429632] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.429654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.433963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.434165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.434185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.439087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.439346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.439367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.445613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.445759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.445777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.452542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.452667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.452685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.460220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.460384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.460403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.467609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.467771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.467787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.475338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 
00:27:48.046 [2024-10-14 16:52:52.475496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.475514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.482755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.482872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.482890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.490613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.490697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.490715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.498620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.498781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.498799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.505845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.505931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.505950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.513360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.513484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.513504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.046 [2024-10-14 16:52:52.521281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.046 [2024-10-14 16:52:52.521466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.046 [2024-10-14 16:52:52.521483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.047 [2024-10-14 16:52:52.528730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.047 [2024-10-14 16:52:52.528841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-10-14 16:52:52.528860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-10-14 16:52:52.535308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.047 [2024-10-14 16:52:52.535416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-10-14 16:52:52.535440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.047 [2024-10-14 16:52:52.539916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.047 [2024-10-14 16:52:52.539977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-10-14 16:52:52.539995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.047 [2024-10-14 16:52:52.545308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.047 [2024-10-14 16:52:52.545393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-10-14 16:52:52.545411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.047 [2024-10-14 16:52:52.550161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.047 [2024-10-14 16:52:52.550212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-10-14 16:52:52.550230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.047 [2024-10-14 16:52:52.554966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcfaa0) with pdu=0x2000166fef90 00:27:48.047 [2024-10-14 16:52:52.555048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:48.047 [2024-10-14 16:52:52.555066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:48.047 6281.50 IOPS, 785.19 MiB/s 00:27:48.047 Latency(us) 00:27:48.047 [2024-10-14T14:52:52.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.047 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:48.047 nvme0n1 : 2.00 6278.70 784.84 0.00 0.00 2543.96 1903.66 12108.56 00:27:48.047 [2024-10-14T14:52:52.681Z] =================================================================================================================== 00:27:48.047 [2024-10-14T14:52:52.681Z] Total : 6278.70 784.84 0.00 0.00 2543.96 
1903.66 12108.56 00:27:48.047 { 00:27:48.047 "results": [ 00:27:48.047 { 00:27:48.047 "job": "nvme0n1", 00:27:48.047 "core_mask": "0x2", 00:27:48.047 "workload": "randwrite", 00:27:48.047 "status": "finished", 00:27:48.047 "queue_depth": 16, 00:27:48.047 "io_size": 131072, 00:27:48.047 "runtime": 2.00328, 00:27:48.047 "iops": 6278.702927199393, 00:27:48.047 "mibps": 784.8378658999242, 00:27:48.047 "io_failed": 0, 00:27:48.047 "io_timeout": 0, 00:27:48.047 "avg_latency_us": 2543.9615279891573, 00:27:48.047 "min_latency_us": 1903.664761904762, 00:27:48.047 "max_latency_us": 12108.55619047619 00:27:48.047 } 00:27:48.047 ], 00:27:48.047 "core_count": 1 00:27:48.047 } 00:27:48.047 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:48.047 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:48.047 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:48.047 | .driver_specific 00:27:48.047 | .nvme_error 00:27:48.047 | .status_code 00:27:48.047 | .command_transient_transport_error' 00:27:48.047 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 405 > 0 )) 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 688833 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 688833 ']' 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 688833 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 688833 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 688833' 00:27:48.354 killing process with pid 688833 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 688833 00:27:48.354 Received shutdown signal, test time was about 2.000000 seconds 00:27:48.354 00:27:48.354 Latency(us) 00:27:48.354 [2024-10-14T14:52:52.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.354 [2024-10-14T14:52:52.988Z] =================================================================================================================== 00:27:48.354 [2024-10-14T14:52:52.988Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:48.354 16:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 688833 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 687015 
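For reference, the get_transient_errcount check traced above boils down to a single bperf RPC piped through jq followed by a non-zero assertion. A minimal standalone sketch, reusing only the socket path, bdev name, and jq filter already shown in this log (the variable name and echo are assumptions for illustration):
# same rpc.py path, bperf socket, and jq filter as traced above; only a non-zero count is required
errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "transient transport errors recorded: $errcount"   # 405 observed in this run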
00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 687015 ']' 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 687015 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 687015 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 687015' 00:27:48.613 killing process with pid 687015 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 687015 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 687015 00:27:48.613 00:27:48.613 real 0m13.865s 00:27:48.613 user 0m26.488s 00:27:48.613 sys 0m4.509s 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.613 ************************************ 00:27:48.613 END TEST nvmf_digest_error 00:27:48.613 ************************************ 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:48.613 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:48.872 rmmod nvme_tcp 00:27:48.872 rmmod nvme_fabrics 00:27:48.872 rmmod nvme_keyring 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 687015 ']' 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 687015 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 687015 ']' 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 687015 00:27:48.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (687015) - No such process 00:27:48.872 16:52:53 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 687015 is not found' 00:27:48.872 Process with pid 687015 is not found 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.872 16:52:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.776 16:52:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:50.776 00:27:50.776 real 0m36.177s 00:27:50.776 user 0m54.914s 00:27:50.776 sys 0m13.641s 00:27:50.776 16:52:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:50.776 16:52:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:50.776 ************************************ 00:27:50.776 END TEST nvmf_digest 00:27:50.776 ************************************ 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.035 ************************************ 00:27:51.035 START TEST nvmf_bdevperf 00:27:51.035 ************************************ 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:51.035 * Looking for test storage... 
00:27:51.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:51.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.035 --rc genhtml_branch_coverage=1 00:27:51.035 --rc genhtml_function_coverage=1 00:27:51.035 --rc genhtml_legend=1 00:27:51.035 --rc geninfo_all_blocks=1 00:27:51.035 --rc geninfo_unexecuted_blocks=1 00:27:51.035 00:27:51.035 ' 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:51.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.035 --rc genhtml_branch_coverage=1 00:27:51.035 --rc genhtml_function_coverage=1 00:27:51.035 --rc genhtml_legend=1 00:27:51.035 --rc geninfo_all_blocks=1 00:27:51.035 --rc geninfo_unexecuted_blocks=1 00:27:51.035 00:27:51.035 ' 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:51.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.035 --rc genhtml_branch_coverage=1 00:27:51.035 --rc genhtml_function_coverage=1 00:27:51.035 --rc genhtml_legend=1 00:27:51.035 --rc geninfo_all_blocks=1 00:27:51.035 --rc geninfo_unexecuted_blocks=1 00:27:51.035 00:27:51.035 ' 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:51.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:51.035 --rc genhtml_branch_coverage=1 00:27:51.035 --rc genhtml_function_coverage=1 00:27:51.035 --rc genhtml_legend=1 00:27:51.035 --rc geninfo_all_blocks=1 00:27:51.035 --rc geninfo_unexecuted_blocks=1 00:27:51.035 00:27:51.035 ' 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.035 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.036 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:51.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.295 16:52:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:57.860 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:57.860 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:57.860 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:57.861 Found net devices under 0000:86:00.0: cvl_0_0 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:57.861 Found net devices under 0000:86:00.1: cvl_0_1 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:57.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:27:57.861 00:27:57.861 --- 10.0.0.2 ping statistics --- 00:27:57.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.861 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:57.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:27:57.861 00:27:57.861 --- 10.0.0.1 ping statistics --- 00:27:57.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.861 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=692858 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 692858 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 692858 ']' 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:57.861 16:53:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:57.861 [2024-10-14 16:53:01.720539] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:27:57.861 [2024-10-14 16:53:01.720583] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.861 [2024-10-14 16:53:01.793891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:57.861 [2024-10-14 16:53:01.836897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.861 [2024-10-14 16:53:01.836936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.861 [2024-10-14 16:53:01.836943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.861 [2024-10-14 16:53:01.836949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.861 [2024-10-14 16:53:01.836954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:57.861 [2024-10-14 16:53:01.838368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.861 [2024-10-14 16:53:01.838473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.861 [2024-10-14 16:53:01.838473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.120 [2024-10-14 16:53:02.600294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.120 Malloc0 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:58.120 [2024-10-14 16:53:02.667752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:58.120 { 00:27:58.120 "params": { 00:27:58.120 "name": "Nvme$subsystem", 00:27:58.120 "trtype": "$TEST_TRANSPORT", 00:27:58.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.120 "adrfam": "ipv4", 00:27:58.120 "trsvcid": "$NVMF_PORT", 00:27:58.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.120 "hdgst": ${hdgst:-false}, 00:27:58.120 "ddgst": ${ddgst:-false} 00:27:58.120 }, 00:27:58.120 "method": "bdev_nvme_attach_controller" 00:27:58.120 } 00:27:58.120 EOF 00:27:58.120 )") 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:27:58.120 16:53:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:58.120 "params": { 00:27:58.120 "name": "Nvme1", 00:27:58.120 "trtype": "tcp", 00:27:58.120 "traddr": "10.0.0.2", 00:27:58.120 "adrfam": "ipv4", 00:27:58.120 "trsvcid": "4420", 00:27:58.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:58.120 "hdgst": false, 00:27:58.120 "ddgst": false 00:27:58.120 }, 00:27:58.120 "method": "bdev_nvme_attach_controller" 00:27:58.120 }' 00:27:58.120 [2024-10-14 16:53:02.717907] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:27:58.120 [2024-10-14 16:53:02.717948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693096 ] 00:27:58.379 [2024-10-14 16:53:02.784667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.379 [2024-10-14 16:53:02.825771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.379 Running I/O for 1 seconds... 00:27:59.753 11519.00 IOPS, 45.00 MiB/s 00:27:59.753 Latency(us) 00:27:59.753 [2024-10-14T14:53:04.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.753 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:59.753 Verification LBA range: start 0x0 length 0x4000 00:27:59.753 Nvme1n1 : 1.01 11560.12 45.16 0.00 0.00 11025.25 2137.72 14667.58 00:27:59.753 [2024-10-14T14:53:04.387Z] =================================================================================================================== 00:27:59.753 [2024-10-14T14:53:04.387Z] Total : 11560.12 45.16 0.00 0.00 11025.25 2137.72 14667.58 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=693336 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:59.753 { 00:27:59.753 "params": { 00:27:59.753 "name": "Nvme$subsystem", 00:27:59.753 "trtype": "$TEST_TRANSPORT", 00:27:59.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.753 "adrfam": "ipv4", 00:27:59.753 "trsvcid": "$NVMF_PORT", 00:27:59.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.753 "hdgst": ${hdgst:-false}, 00:27:59.753 "ddgst": ${ddgst:-false} 00:27:59.753 }, 00:27:59.753 "method": "bdev_nvme_attach_controller" 00:27:59.753 } 00:27:59.753 EOF 00:27:59.753 )") 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:27:59.753 16:53:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:59.753 "params": { 00:27:59.753 "name": "Nvme1", 00:27:59.753 "trtype": "tcp", 00:27:59.753 "traddr": "10.0.0.2", 00:27:59.753 "adrfam": "ipv4", 00:27:59.753 "trsvcid": "4420", 00:27:59.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:59.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:59.753 "hdgst": false, 00:27:59.753 "ddgst": false 00:27:59.753 }, 00:27:59.753 "method": "bdev_nvme_attach_controller" 00:27:59.753 }' 00:27:59.753 [2024-10-14 16:53:04.200799] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:27:59.753 [2024-10-14 16:53:04.200851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693336 ] 00:27:59.753 [2024-10-14 16:53:04.270371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.753 [2024-10-14 16:53:04.308252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.011 Running I/O for 15 seconds... 00:28:01.955 11565.00 IOPS, 45.18 MiB/s [2024-10-14T14:53:07.531Z] 11527.50 IOPS, 45.03 MiB/s [2024-10-14T14:53:07.531Z] 16:53:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 692858 00:28:02.897 16:53:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:02.897 [2024-10-14 16:53:07.169999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.897 [2024-10-14 16:53:07.170037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.897 [2024-10-14 16:53:07.170064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.897 [2024-10-14 16:53:07.170081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.897 [2024-10-14 16:53:07.170097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.897 [2024-10-14 16:53:07.170116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.897 [2024-10-14 
16:53:07.170130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.897 [2024-10-14 16:53:07.170145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.897 [2024-10-14 16:53:07.170405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.897 [2024-10-14 16:53:07.170415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170452] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:02.898 [2024-10-14 16:53:07.170855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.898 [2024-10-14 16:53:07.170907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.170985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.170992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 
16:53:07.171000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.171006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.171013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.171020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.171027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.171033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.171042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.171049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.171057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.171063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.171071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.171077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.171085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.171091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.898 [2024-10-14 16:53:07.171099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.898 [2024-10-14 16:53:07.171105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171284] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106592 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.899 [2024-10-14 16:53:07.171679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.899 [2024-10-14 16:53:07.171685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:02.900 [2024-10-14 16:53:07.171727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 
16:53:07.171870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.171991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.171997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.172005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.900 [2024-10-14 16:53:07.172011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.172018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0fc20 is same with the state(6) to be set 00:28:02.900 [2024-10-14 16:53:07.172027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:02.900 [2024-10-14 16:53:07.172032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:02.900 [2024-10-14 16:53:07.172039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106840 len:8 PRP1 0x0 PRP2 0x0 00:28:02.900 [2024-10-14 16:53:07.172047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.172089] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c0fc20 was disconnected and freed. reset controller. 00:28:02.900 [2024-10-14 16:53:07.172133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.900 [2024-10-14 16:53:07.172142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.172149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.900 [2024-10-14 16:53:07.172155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.172162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.900 [2024-10-14 16:53:07.172168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.172175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:02.900 [2024-10-14 16:53:07.172181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.900 [2024-10-14 16:53:07.172187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.900 [2024-10-14 16:53:07.174945] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.900 [2024-10-14 16:53:07.174973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.900 [2024-10-14 16:53:07.175541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.900 [2024-10-14 16:53:07.175557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.900 [2024-10-14 16:53:07.175565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.900 [2024-10-14 16:53:07.175744] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.900 [2024-10-14 16:53:07.175917] 
nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.900 [2024-10-14 16:53:07.175926] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.900 [2024-10-14 16:53:07.175934] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.900 [2024-10-14 16:53:07.178670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.900 [2024-10-14 16:53:07.188196] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.900 [2024-10-14 16:53:07.188637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.900 [2024-10-14 16:53:07.188685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.900 [2024-10-14 16:53:07.188709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.900 [2024-10-14 16:53:07.189151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.900 [2024-10-14 16:53:07.189323] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.900 [2024-10-14 16:53:07.189331] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.900 [2024-10-14 16:53:07.189338] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.900 [2024-10-14 16:53:07.192051] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.900 [2024-10-14 16:53:07.201079] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.900 [2024-10-14 16:53:07.201495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.900 [2024-10-14 16:53:07.201510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.900 [2024-10-14 16:53:07.201518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.900 [2024-10-14 16:53:07.201689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.900 [2024-10-14 16:53:07.201857] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.900 [2024-10-14 16:53:07.201865] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.900 [2024-10-14 16:53:07.201872] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.901 [2024-10-14 16:53:07.204467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
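The flood of *NOTICE* lines above is the driver completing every I/O that was still queued on qid:1 when the submission queue was torn down for the reset: each command is manually completed with the status printed as "ABORTED - SQ DELETION (00/08)". The pair in parentheses is status code type / status code; type 0x0 is the generic command status set and code 0x08 is "Command Aborted due to SQ Deletion" in the NVMe base specification. A minimal standalone sketch (plain C, not SPDK code) of how that "(SCT/SC)" pair falls out of the 16-bit completion status word, assuming the bit layout SPDK uses (phase tag in bit 0, SC in bits 1-8, SCT in bits 9-11):

    /* Illustrative only: split a completion status word into the fields the
     * log prints as "(SCT/SC)".  Bit layout assumed from the NVMe spec /
     * SPDK's spdk_nvme_status bitfield: p:1, sc:8, sct:3, crd:2, m:1, dnr:1. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t status = 0x08 << 1;          /* SCT=0x0, SC=0x08, phase bit 0 */
        unsigned sc  = (status >> 1) & 0xff;  /* status code                    */
        unsigned sct = (status >> 9) & 0x7;   /* status code type               */
        printf("(%02x/%02x)%s\n", sct, sc,
               (sct == 0x0 && sc == 0x08) ? " = ABORTED - SQ DELETION" : "");
        return 0;
    }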
00:28:02.901 [2024-10-14 16:53:07.213814] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.901 [2024-10-14 16:53:07.214172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.901 [2024-10-14 16:53:07.214188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.901 [2024-10-14 16:53:07.214195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.901 [2024-10-14 16:53:07.214365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.901 [2024-10-14 16:53:07.214533] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.901 [2024-10-14 16:53:07.214541] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.901 [2024-10-14 16:53:07.214547] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.901 [2024-10-14 16:53:07.217191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.901 [2024-10-14 16:53:07.226649] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.901 [2024-10-14 16:53:07.227057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.901 [2024-10-14 16:53:07.227101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.901 [2024-10-14 16:53:07.227124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.901 [2024-10-14 16:53:07.227717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.901 [2024-10-14 16:53:07.228258] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.901 [2024-10-14 16:53:07.228266] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.901 [2024-10-14 16:53:07.228272] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.901 [2024-10-14 16:53:07.230866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:02.901 [2024-10-14 16:53:07.239589] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.901 [2024-10-14 16:53:07.240008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.901 [2024-10-14 16:53:07.240025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.901 [2024-10-14 16:53:07.240032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.901 [2024-10-14 16:53:07.240199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.901 [2024-10-14 16:53:07.240366] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.901 [2024-10-14 16:53:07.240374] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.901 [2024-10-14 16:53:07.240380] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.901 [2024-10-14 16:53:07.242986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.901 [2024-10-14 16:53:07.252333] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.901 [2024-10-14 16:53:07.252741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.901 [2024-10-14 16:53:07.252784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.901 [2024-10-14 16:53:07.252807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.901 [2024-10-14 16:53:07.253313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.901 [2024-10-14 16:53:07.253471] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.901 [2024-10-14 16:53:07.253479] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.901 [2024-10-14 16:53:07.253488] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.901 [2024-10-14 16:53:07.256100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
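Each reset attempt then dies at the socket layer: connect() to 10.0.0.2 port 4420 returns errno 111, the qpair never gets a usable file descriptor, and the follow-up flush reports "Bad file descriptor" before the reconnect is marked failed. On Linux, errno 111 is ECONNREFUSED, i.e. nothing is listening on the target port while the subsystem is down. A one-liner to confirm the mapping on the build host (assumes Linux/glibc errno numbering):

    /* Quick check of what errno 111 means on this platform (Linux/glibc). */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        printf("errno %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        /* expected on Linux: "errno 111: Connection refused" */
        return 0;
    }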
00:28:02.901 [2024-10-14 16:53:07.265038] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.901 [2024-10-14 16:53:07.265442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.901 [2024-10-14 16:53:07.265458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.901 [2024-10-14 16:53:07.265465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.901 [2024-10-14 16:53:07.265638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.901 [2024-10-14 16:53:07.265805] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.901 [2024-10-14 16:53:07.265813] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.901 [2024-10-14 16:53:07.265819] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.901 [2024-10-14 16:53:07.268415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.901 [2024-10-14 16:53:07.277774] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.901 [2024-10-14 16:53:07.278183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.901 [2024-10-14 16:53:07.278198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.901 [2024-10-14 16:53:07.278204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.901 [2024-10-14 16:53:07.278362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.901 [2024-10-14 16:53:07.278520] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.901 [2024-10-14 16:53:07.278527] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.901 [2024-10-14 16:53:07.278533] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.901 [2024-10-14 16:53:07.281155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:02.901 [2024-10-14 16:53:07.290676] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.901 [2024-10-14 16:53:07.290998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.901 [2024-10-14 16:53:07.291034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.901 [2024-10-14 16:53:07.291059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.901 [2024-10-14 16:53:07.291651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.901 [2024-10-14 16:53:07.292232] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.901 [2024-10-14 16:53:07.292269] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.901 [2024-10-14 16:53:07.292275] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.901 [2024-10-14 16:53:07.294871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.901 [2024-10-14 16:53:07.303383] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.901 [2024-10-14 16:53:07.303780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.901 [2024-10-14 16:53:07.303823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.901 [2024-10-14 16:53:07.303846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.901 [2024-10-14 16:53:07.304383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.901 [2024-10-14 16:53:07.304778] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.901 [2024-10-14 16:53:07.304796] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.901 [2024-10-14 16:53:07.304810] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.901 [2024-10-14 16:53:07.311031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:02.901 [2024-10-14 16:53:07.318156] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.901 [2024-10-14 16:53:07.318650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.901 [2024-10-14 16:53:07.318671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.901 [2024-10-14 16:53:07.318682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.901 [2024-10-14 16:53:07.318934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.901 [2024-10-14 16:53:07.319189] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.901 [2024-10-14 16:53:07.319200] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.901 [2024-10-14 16:53:07.319209] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.901 [2024-10-14 16:53:07.323256] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.901 [2024-10-14 16:53:07.331156] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.901 [2024-10-14 16:53:07.331554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.331570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.331577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.331751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.331918] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.331926] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.331932] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.334588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:02.902 [2024-10-14 16:53:07.343920] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.902 [2024-10-14 16:53:07.344358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.344400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.344423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.345023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.345283] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.345291] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.345297] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.351491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.902 [2024-10-14 16:53:07.358818] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.902 [2024-10-14 16:53:07.359311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.359332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.359342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.359595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.359857] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.359868] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.359877] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.363917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
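One detail worth noting in the aborted-command prints: every data command is len:8 logical blocks while the SGL segment reports len:0x1000 bytes, which is consistent with 4 KiB I/Os on a 512-byte-block namespace. The block size is an inference from the log (the namespace format is not printed in this excerpt); a quick sanity check of the arithmetic:

    /* Sanity-check the relationship visible in the aborted-command prints:
     * len:8 logical blocks vs. SGL "len:0x1000".  The 512 B block size is an
     * assumption inferred from the log, not printed by the test itself. */
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned nlb = 8, block_size = 512;
        assert(nlb * block_size == 0x1000);
        printf("%u blocks x %u B = %u B (0x%x)\n",
               nlb, block_size, nlb * block_size, nlb * block_size);
        return 0;
    }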
00:28:02.902 [2024-10-14 16:53:07.371894] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.902 [2024-10-14 16:53:07.372302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.372318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.372326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.372496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.372674] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.372683] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.372689] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.375418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.902 [2024-10-14 16:53:07.384632] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.902 [2024-10-14 16:53:07.384974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.384990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.384997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.385164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.385335] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.385343] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.385352] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.387955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:02.902 [2024-10-14 16:53:07.397443] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.902 [2024-10-14 16:53:07.397798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.397814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.397821] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.397988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.398154] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.398163] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.398169] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.400789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.902 [2024-10-14 16:53:07.410168] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.902 [2024-10-14 16:53:07.410558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.410573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.410580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.410765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.410931] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.410939] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.410945] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.413542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:02.902 [2024-10-14 16:53:07.422928] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.902 [2024-10-14 16:53:07.423334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.423350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.423358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.423525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.423719] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.423728] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.423735] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.426482] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.902 [2024-10-14 16:53:07.435912] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.902 [2024-10-14 16:53:07.436335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.436355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.436362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.436534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.436714] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.436723] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.436729] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.439465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:02.902 [2024-10-14 16:53:07.449015] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.902 [2024-10-14 16:53:07.449430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.449446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.449454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.449630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.449803] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.449812] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.449818] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.452552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.902 [2024-10-14 16:53:07.461979] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.902 [2024-10-14 16:53:07.462394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.902 [2024-10-14 16:53:07.462436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.902 [2024-10-14 16:53:07.462459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.902 [2024-10-14 16:53:07.463049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.902 [2024-10-14 16:53:07.463566] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.902 [2024-10-14 16:53:07.463575] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.902 [2024-10-14 16:53:07.463580] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.902 [2024-10-14 16:53:07.466305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:02.902 [2024-10-14 16:53:07.474966] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.903 [2024-10-14 16:53:07.475377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.903 [2024-10-14 16:53:07.475420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.903 [2024-10-14 16:53:07.475443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.903 [2024-10-14 16:53:07.476033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.903 [2024-10-14 16:53:07.476635] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.903 [2024-10-14 16:53:07.476657] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.903 [2024-10-14 16:53:07.476664] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.903 [2024-10-14 16:53:07.482785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.903 [2024-10-14 16:53:07.490049] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.903 [2024-10-14 16:53:07.490543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.903 [2024-10-14 16:53:07.490564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.903 [2024-10-14 16:53:07.490574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.903 [2024-10-14 16:53:07.490832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.903 [2024-10-14 16:53:07.491087] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.903 [2024-10-14 16:53:07.491098] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.903 [2024-10-14 16:53:07.491107] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.903 [2024-10-14 16:53:07.495155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:02.903 [2024-10-14 16:53:07.503028] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.903 [2024-10-14 16:53:07.503409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.903 [2024-10-14 16:53:07.503425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.903 [2024-10-14 16:53:07.503432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.903 [2024-10-14 16:53:07.503598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.903 [2024-10-14 16:53:07.503773] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.903 [2024-10-14 16:53:07.503781] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.903 [2024-10-14 16:53:07.503787] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.903 [2024-10-14 16:53:07.506450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:02.903 [2024-10-14 16:53:07.515758] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:02.903 [2024-10-14 16:53:07.516201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.903 [2024-10-14 16:53:07.516243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:02.903 [2024-10-14 16:53:07.516266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:02.903 [2024-10-14 16:53:07.516790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:02.903 [2024-10-14 16:53:07.516957] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:02.903 [2024-10-14 16:53:07.516965] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:02.903 [2024-10-14 16:53:07.516971] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:02.903 [2024-10-14 16:53:07.522678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.163 [2024-10-14 16:53:07.530710] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.163 [2024-10-14 16:53:07.531204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.163 [2024-10-14 16:53:07.531225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.163 [2024-10-14 16:53:07.531236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.163 [2024-10-14 16:53:07.531488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.163 [2024-10-14 16:53:07.531749] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.163 [2024-10-14 16:53:07.531762] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.163 [2024-10-14 16:53:07.531770] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.163 [2024-10-14 16:53:07.535801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.163 [2024-10-14 16:53:07.543644] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.163 [2024-10-14 16:53:07.543970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.163 [2024-10-14 16:53:07.543985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.163 [2024-10-14 16:53:07.543992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.163 [2024-10-14 16:53:07.544157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.544323] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.544331] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.164 [2024-10-14 16:53:07.544337] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.164 [2024-10-14 16:53:07.546998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.164 [2024-10-14 16:53:07.556352] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.164 [2024-10-14 16:53:07.556764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.164 [2024-10-14 16:53:07.556780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.164 [2024-10-14 16:53:07.556788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.164 [2024-10-14 16:53:07.556954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.557120] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.557129] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.164 [2024-10-14 16:53:07.557134] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.164 [2024-10-14 16:53:07.559751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.164 [2024-10-14 16:53:07.569199] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.164 [2024-10-14 16:53:07.569613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.164 [2024-10-14 16:53:07.569629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.164 [2024-10-14 16:53:07.569639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.164 [2024-10-14 16:53:07.569807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.569977] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.569984] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.164 [2024-10-14 16:53:07.569990] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.164 [2024-10-14 16:53:07.572595] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.164 [2024-10-14 16:53:07.581978] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.164 [2024-10-14 16:53:07.582361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.164 [2024-10-14 16:53:07.582377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.164 [2024-10-14 16:53:07.582384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.164 [2024-10-14 16:53:07.582541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.582723] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.582732] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.164 [2024-10-14 16:53:07.582738] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.164 [2024-10-14 16:53:07.585326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.164 9882.33 IOPS, 38.60 MiB/s [2024-10-14T14:53:07.798Z] [2024-10-14 16:53:07.595859] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.164 [2024-10-14 16:53:07.596241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.164 [2024-10-14 16:53:07.596256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.164 [2024-10-14 16:53:07.596263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.164 [2024-10-14 16:53:07.596419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.596577] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.596585] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.164 [2024-10-14 16:53:07.596590] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.164 [2024-10-14 16:53:07.599197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
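The interleaved "9882.33 IOPS, 38.60 MiB/s" entry is a periodic throughput sample from the I/O generator driving the test, printed between reconnect attempts. The two numbers agree if one assumes a 4 KiB I/O size (the block size is not stated in this excerpt): 9882.33 * 4096 B is about 40.5 MB/s, which is about 38.60 MiB/s.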
00:28:03.164 [2024-10-14 16:53:07.608583] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.164 [2024-10-14 16:53:07.608993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.164 [2024-10-14 16:53:07.609008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.164 [2024-10-14 16:53:07.609015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.164 [2024-10-14 16:53:07.609172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.609330] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.609341] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.164 [2024-10-14 16:53:07.609347] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.164 [2024-10-14 16:53:07.611944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.164 [2024-10-14 16:53:07.621411] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.164 [2024-10-14 16:53:07.621846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.164 [2024-10-14 16:53:07.621863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.164 [2024-10-14 16:53:07.621870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.164 [2024-10-14 16:53:07.622037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.622204] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.622212] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.164 [2024-10-14 16:53:07.622218] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.164 [2024-10-14 16:53:07.624812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.164 [2024-10-14 16:53:07.634197] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.164 [2024-10-14 16:53:07.634581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.164 [2024-10-14 16:53:07.634597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.164 [2024-10-14 16:53:07.634607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.164 [2024-10-14 16:53:07.634788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.634955] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.634962] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.164 [2024-10-14 16:53:07.634969] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.164 [2024-10-14 16:53:07.637623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.164 [2024-10-14 16:53:07.646980] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.164 [2024-10-14 16:53:07.647341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.164 [2024-10-14 16:53:07.647357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.164 [2024-10-14 16:53:07.647364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.164 [2024-10-14 16:53:07.647522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.647703] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.647712] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.164 [2024-10-14 16:53:07.647718] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.164 [2024-10-14 16:53:07.650307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.164 [2024-10-14 16:53:07.659718] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.164 [2024-10-14 16:53:07.660105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.164 [2024-10-14 16:53:07.660120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.164 [2024-10-14 16:53:07.660126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.164 [2024-10-14 16:53:07.660283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.660440] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.660448] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.164 [2024-10-14 16:53:07.660453] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.164 [2024-10-14 16:53:07.663059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.164 [2024-10-14 16:53:07.672426] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.164 [2024-10-14 16:53:07.672827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.164 [2024-10-14 16:53:07.672843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.164 [2024-10-14 16:53:07.672850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.164 [2024-10-14 16:53:07.673016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.164 [2024-10-14 16:53:07.673182] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.164 [2024-10-14 16:53:07.673190] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.165 [2024-10-14 16:53:07.673196] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.165 [2024-10-14 16:53:07.675947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.165 [2024-10-14 16:53:07.685503] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.165 [2024-10-14 16:53:07.685846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.165 [2024-10-14 16:53:07.685862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.165 [2024-10-14 16:53:07.685870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.165 [2024-10-14 16:53:07.686040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.165 [2024-10-14 16:53:07.686211] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.165 [2024-10-14 16:53:07.686219] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.165 [2024-10-14 16:53:07.686226] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.165 [2024-10-14 16:53:07.688904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.165 [2024-10-14 16:53:07.698390] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.165 [2024-10-14 16:53:07.698811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.165 [2024-10-14 16:53:07.698827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.165 [2024-10-14 16:53:07.698834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.165 [2024-10-14 16:53:07.699015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.165 [2024-10-14 16:53:07.699181] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.165 [2024-10-14 16:53:07.699189] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.165 [2024-10-14 16:53:07.699195] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.165 [2024-10-14 16:53:07.701847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.165 [2024-10-14 16:53:07.711302] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.165 [2024-10-14 16:53:07.711719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.165 [2024-10-14 16:53:07.711736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.165 [2024-10-14 16:53:07.711743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.165 [2024-10-14 16:53:07.711921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.165 [2024-10-14 16:53:07.712088] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.165 [2024-10-14 16:53:07.712096] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.165 [2024-10-14 16:53:07.712102] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.165 [2024-10-14 16:53:07.714715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.165 [2024-10-14 16:53:07.724131] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.165 [2024-10-14 16:53:07.724527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.165 [2024-10-14 16:53:07.724570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.165 [2024-10-14 16:53:07.724593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.165 [2024-10-14 16:53:07.725077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.165 [2024-10-14 16:53:07.725245] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.165 [2024-10-14 16:53:07.725253] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.165 [2024-10-14 16:53:07.725259] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.165 [2024-10-14 16:53:07.727851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.165 [2024-10-14 16:53:07.736859] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.165 [2024-10-14 16:53:07.737266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.165 [2024-10-14 16:53:07.737281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.165 [2024-10-14 16:53:07.737289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.165 [2024-10-14 16:53:07.737455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.165 [2024-10-14 16:53:07.737626] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.165 [2024-10-14 16:53:07.737651] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.165 [2024-10-14 16:53:07.737660] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.165 [2024-10-14 16:53:07.740350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.165 [2024-10-14 16:53:07.749552] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.165 [2024-10-14 16:53:07.749937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.165 [2024-10-14 16:53:07.749952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.165 [2024-10-14 16:53:07.749959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.165 [2024-10-14 16:53:07.750116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.165 [2024-10-14 16:53:07.750273] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.165 [2024-10-14 16:53:07.750281] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.165 [2024-10-14 16:53:07.750286] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.165 [2024-10-14 16:53:07.752875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.165 [2024-10-14 16:53:07.762342] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.165 [2024-10-14 16:53:07.762656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.165 [2024-10-14 16:53:07.762672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.165 [2024-10-14 16:53:07.762679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.165 [2024-10-14 16:53:07.762836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.165 [2024-10-14 16:53:07.762993] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.165 [2024-10-14 16:53:07.763000] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.165 [2024-10-14 16:53:07.763006] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.165 [2024-10-14 16:53:07.765608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.165 [2024-10-14 16:53:07.775115] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.165 [2024-10-14 16:53:07.775529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.165 [2024-10-14 16:53:07.775545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.165 [2024-10-14 16:53:07.775552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.165 [2024-10-14 16:53:07.775722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.165 [2024-10-14 16:53:07.775888] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.165 [2024-10-14 16:53:07.775897] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.165 [2024-10-14 16:53:07.775902] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.165 [2024-10-14 16:53:07.778493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.165 [2024-10-14 16:53:07.787877] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.165 [2024-10-14 16:53:07.788278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.165 [2024-10-14 16:53:07.788294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.165 [2024-10-14 16:53:07.788301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.165 [2024-10-14 16:53:07.788459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.165 [2024-10-14 16:53:07.788623] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.165 [2024-10-14 16:53:07.788647] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.165 [2024-10-14 16:53:07.788653] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.165 [2024-10-14 16:53:07.791250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.433 [2024-10-14 16:53:07.800846] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.433 [2024-10-14 16:53:07.801261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.433 [2024-10-14 16:53:07.801277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.433 [2024-10-14 16:53:07.801285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.433 [2024-10-14 16:53:07.801453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.433 [2024-10-14 16:53:07.801627] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.433 [2024-10-14 16:53:07.801636] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.433 [2024-10-14 16:53:07.801642] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.433 [2024-10-14 16:53:07.804337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.433 [2024-10-14 16:53:07.813627] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.433 [2024-10-14 16:53:07.814060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.433 [2024-10-14 16:53:07.814104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.433 [2024-10-14 16:53:07.814126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.433 [2024-10-14 16:53:07.814534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.433 [2024-10-14 16:53:07.814708] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.433 [2024-10-14 16:53:07.814716] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.433 [2024-10-14 16:53:07.814722] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.433 [2024-10-14 16:53:07.817317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.433 [2024-10-14 16:53:07.826407] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.433 [2024-10-14 16:53:07.826793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.434 [2024-10-14 16:53:07.826810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.434 [2024-10-14 16:53:07.826818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.434 [2024-10-14 16:53:07.826988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.434 [2024-10-14 16:53:07.827155] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.434 [2024-10-14 16:53:07.827163] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.434 [2024-10-14 16:53:07.827169] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.434 [2024-10-14 16:53:07.829787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.434 [2024-10-14 16:53:07.839260] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.434 [2024-10-14 16:53:07.839679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.434 [2024-10-14 16:53:07.839696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.434 [2024-10-14 16:53:07.839703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.434 [2024-10-14 16:53:07.839870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.434 [2024-10-14 16:53:07.840035] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.434 [2024-10-14 16:53:07.840043] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.434 [2024-10-14 16:53:07.840049] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.434 [2024-10-14 16:53:07.842697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.434 [2024-10-14 16:53:07.852072] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.434 [2024-10-14 16:53:07.852456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.434 [2024-10-14 16:53:07.852471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.434 [2024-10-14 16:53:07.852478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.434 [2024-10-14 16:53:07.852658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.434 [2024-10-14 16:53:07.852824] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.434 [2024-10-14 16:53:07.852832] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.434 [2024-10-14 16:53:07.852838] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.434 [2024-10-14 16:53:07.855432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.434 [2024-10-14 16:53:07.864781] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.434 [2024-10-14 16:53:07.865191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.434 [2024-10-14 16:53:07.865207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.434 [2024-10-14 16:53:07.865214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.434 [2024-10-14 16:53:07.865380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.434 [2024-10-14 16:53:07.865547] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.434 [2024-10-14 16:53:07.865555] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.434 [2024-10-14 16:53:07.865564] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.434 [2024-10-14 16:53:07.868166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.434 [2024-10-14 16:53:07.877551] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.434 [2024-10-14 16:53:07.877894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.434 [2024-10-14 16:53:07.877909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.434 [2024-10-14 16:53:07.877917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.434 [2024-10-14 16:53:07.878084] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.434 [2024-10-14 16:53:07.878250] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.434 [2024-10-14 16:53:07.878258] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.434 [2024-10-14 16:53:07.878264] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.434 [2024-10-14 16:53:07.880878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.434 [2024-10-14 16:53:07.890359] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.434 [2024-10-14 16:53:07.890776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.434 [2024-10-14 16:53:07.890792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.434 [2024-10-14 16:53:07.890799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.434 [2024-10-14 16:53:07.890957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.435 [2024-10-14 16:53:07.891135] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.435 [2024-10-14 16:53:07.891143] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.435 [2024-10-14 16:53:07.891149] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.435 [2024-10-14 16:53:07.893809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.435 [2024-10-14 16:53:07.903189] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.435 [2024-10-14 16:53:07.903597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.435 [2024-10-14 16:53:07.903618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.435 [2024-10-14 16:53:07.903626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.435 [2024-10-14 16:53:07.903791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.435 [2024-10-14 16:53:07.903958] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.435 [2024-10-14 16:53:07.903965] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.435 [2024-10-14 16:53:07.903971] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.435 [2024-10-14 16:53:07.906604] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.435 [2024-10-14 16:53:07.915971] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.435 [2024-10-14 16:53:07.916359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.435 [2024-10-14 16:53:07.916378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.435 [2024-10-14 16:53:07.916385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.435 [2024-10-14 16:53:07.916542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.435 [2024-10-14 16:53:07.916726] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.435 [2024-10-14 16:53:07.916734] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.435 [2024-10-14 16:53:07.916740] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.435 [2024-10-14 16:53:07.919416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.435 [2024-10-14 16:53:07.928769] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.435 [2024-10-14 16:53:07.929180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.435 [2024-10-14 16:53:07.929196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.435 [2024-10-14 16:53:07.929203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.435 [2024-10-14 16:53:07.929370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.435 [2024-10-14 16:53:07.929536] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.435 [2024-10-14 16:53:07.929544] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.435 [2024-10-14 16:53:07.929550] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.435 [2024-10-14 16:53:07.932345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.435 [2024-10-14 16:53:07.941819] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.435 [2024-10-14 16:53:07.942220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.435 [2024-10-14 16:53:07.942237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.435 [2024-10-14 16:53:07.942244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.436 [2024-10-14 16:53:07.942415] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.436 [2024-10-14 16:53:07.942588] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.436 [2024-10-14 16:53:07.942596] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.436 [2024-10-14 16:53:07.942609] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.436 [2024-10-14 16:53:07.945312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.436 [2024-10-14 16:53:07.954593] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.436 [2024-10-14 16:53:07.955007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.436 [2024-10-14 16:53:07.955022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.436 [2024-10-14 16:53:07.955029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.436 [2024-10-14 16:53:07.955196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.436 [2024-10-14 16:53:07.955366] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.436 [2024-10-14 16:53:07.955374] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.436 [2024-10-14 16:53:07.955380] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.436 [2024-10-14 16:53:07.958099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.436 [2024-10-14 16:53:07.967300] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.436 [2024-10-14 16:53:07.967701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.436 [2024-10-14 16:53:07.967744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.436 [2024-10-14 16:53:07.967767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.436 [2024-10-14 16:53:07.968346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.436 [2024-10-14 16:53:07.968945] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.436 [2024-10-14 16:53:07.968954] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.436 [2024-10-14 16:53:07.968961] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.436 [2024-10-14 16:53:07.971557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.438 [2024-10-14 16:53:07.980013] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.438 [2024-10-14 16:53:07.980405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.438 [2024-10-14 16:53:07.980447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.438 [2024-10-14 16:53:07.980471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.438 [2024-10-14 16:53:07.980943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.438 [2024-10-14 16:53:07.981111] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.439 [2024-10-14 16:53:07.981119] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.439 [2024-10-14 16:53:07.981125] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.439 [2024-10-14 16:53:07.983727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.439 [2024-10-14 16:53:07.992977] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.439 [2024-10-14 16:53:07.993401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.439 [2024-10-14 16:53:07.993444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.439 [2024-10-14 16:53:07.993468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.439 [2024-10-14 16:53:07.993970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.439 [2024-10-14 16:53:07.994138] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.439 [2024-10-14 16:53:07.994146] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.439 [2024-10-14 16:53:07.994152] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.439 [2024-10-14 16:53:07.996859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.439 [2024-10-14 16:53:08.005924] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.439 [2024-10-14 16:53:08.006285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.439 [2024-10-14 16:53:08.006301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.439 [2024-10-14 16:53:08.006308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.439 [2024-10-14 16:53:08.006476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.440 [2024-10-14 16:53:08.006649] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.440 [2024-10-14 16:53:08.006659] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.440 [2024-10-14 16:53:08.006665] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.440 [2024-10-14 16:53:08.009315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.440 [2024-10-14 16:53:08.018900] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.440 [2024-10-14 16:53:08.019167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.440 [2024-10-14 16:53:08.019184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.440 [2024-10-14 16:53:08.019192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.440 [2024-10-14 16:53:08.019360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.440 [2024-10-14 16:53:08.019526] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.440 [2024-10-14 16:53:08.019535] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.440 [2024-10-14 16:53:08.019541] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.440 [2024-10-14 16:53:08.022147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.440 [2024-10-14 16:53:08.031696] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.440 [2024-10-14 16:53:08.032017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.440 [2024-10-14 16:53:08.032060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.441 [2024-10-14 16:53:08.032084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.441 [2024-10-14 16:53:08.032678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.441 [2024-10-14 16:53:08.033216] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.441 [2024-10-14 16:53:08.033233] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.441 [2024-10-14 16:53:08.033246] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.441 [2024-10-14 16:53:08.039080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.441 [2024-10-14 16:53:08.046240] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.441 [2024-10-14 16:53:08.046715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.441 [2024-10-14 16:53:08.046735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.441 [2024-10-14 16:53:08.046749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.441 [2024-10-14 16:53:08.046983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.441 [2024-10-14 16:53:08.047217] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.441 [2024-10-14 16:53:08.047228] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.441 [2024-10-14 16:53:08.047237] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.441 [2024-10-14 16:53:08.050989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.441 [2024-10-14 16:53:08.059184] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.441 [2024-10-14 16:53:08.059589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.441 [2024-10-14 16:53:08.059611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.442 [2024-10-14 16:53:08.059620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.442 [2024-10-14 16:53:08.059791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.442 [2024-10-14 16:53:08.059962] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.442 [2024-10-14 16:53:08.059971] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.442 [2024-10-14 16:53:08.059977] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.442 [2024-10-14 16:53:08.062718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.705 [2024-10-14 16:53:08.072136] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.705 [2024-10-14 16:53:08.072484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.705 [2024-10-14 16:53:08.072500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.705 [2024-10-14 16:53:08.072507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.705 [2024-10-14 16:53:08.072687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.705 [2024-10-14 16:53:08.072860] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.705 [2024-10-14 16:53:08.072868] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.705 [2024-10-14 16:53:08.072874] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.705 [2024-10-14 16:53:08.075615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.705 [2024-10-14 16:53:08.085158] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.705 [2024-10-14 16:53:08.085578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.705 [2024-10-14 16:53:08.085594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.705 [2024-10-14 16:53:08.085608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.706 [2024-10-14 16:53:08.085780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.706 [2024-10-14 16:53:08.085951] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.706 [2024-10-14 16:53:08.085963] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.706 [2024-10-14 16:53:08.085969] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.706 [2024-10-14 16:53:08.088708] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.706 [2024-10-14 16:53:08.098222] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.706 [2024-10-14 16:53:08.098655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.706 [2024-10-14 16:53:08.098673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.706 [2024-10-14 16:53:08.098681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.706 [2024-10-14 16:53:08.098863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.706 [2024-10-14 16:53:08.099046] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.706 [2024-10-14 16:53:08.099055] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.706 [2024-10-14 16:53:08.099062] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.706 [2024-10-14 16:53:08.101969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.706 [2024-10-14 16:53:08.111461] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.706 [2024-10-14 16:53:08.111935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.706 [2024-10-14 16:53:08.111952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.706 [2024-10-14 16:53:08.111960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.706 [2024-10-14 16:53:08.112142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.706 [2024-10-14 16:53:08.112325] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.706 [2024-10-14 16:53:08.112333] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.706 [2024-10-14 16:53:08.112339] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.706 [2024-10-14 16:53:08.115255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.706 [2024-10-14 16:53:08.124696] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.706 [2024-10-14 16:53:08.125040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.706 [2024-10-14 16:53:08.125057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.706 [2024-10-14 16:53:08.125064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.706 [2024-10-14 16:53:08.125235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.706 [2024-10-14 16:53:08.125407] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.706 [2024-10-14 16:53:08.125415] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.706 [2024-10-14 16:53:08.125421] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.706 [2024-10-14 16:53:08.128157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.706 [2024-10-14 16:53:08.137679] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.706 [2024-10-14 16:53:08.138010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.706 [2024-10-14 16:53:08.138026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.706 [2024-10-14 16:53:08.138034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.706 [2024-10-14 16:53:08.138216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.706 [2024-10-14 16:53:08.138399] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.706 [2024-10-14 16:53:08.138408] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.706 [2024-10-14 16:53:08.138414] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.706 [2024-10-14 16:53:08.141280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.706 [2024-10-14 16:53:08.150737] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.706 [2024-10-14 16:53:08.151155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.706 [2024-10-14 16:53:08.151171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.706 [2024-10-14 16:53:08.151178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.706 [2024-10-14 16:53:08.151350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.706 [2024-10-14 16:53:08.151522] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.706 [2024-10-14 16:53:08.151530] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.706 [2024-10-14 16:53:08.151537] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.706 [2024-10-14 16:53:08.154278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.706 [2024-10-14 16:53:08.163842] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.706 [2024-10-14 16:53:08.164292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.706 [2024-10-14 16:53:08.164308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.706 [2024-10-14 16:53:08.164316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.706 [2024-10-14 16:53:08.164497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.706 [2024-10-14 16:53:08.164686] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.706 [2024-10-14 16:53:08.164695] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.706 [2024-10-14 16:53:08.164702] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.706 [2024-10-14 16:53:08.167616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.706 [2024-10-14 16:53:08.177017] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.706 [2024-10-14 16:53:08.177446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.706 [2024-10-14 16:53:08.177461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.706 [2024-10-14 16:53:08.177468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.706 [2024-10-14 16:53:08.177670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.706 [2024-10-14 16:53:08.177853] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.706 [2024-10-14 16:53:08.177862] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.706 [2024-10-14 16:53:08.177868] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.706 [2024-10-14 16:53:08.180782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.706 [2024-10-14 16:53:08.190231] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.706 [2024-10-14 16:53:08.190654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.706 [2024-10-14 16:53:08.190671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.706 [2024-10-14 16:53:08.190678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.706 [2024-10-14 16:53:08.190861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.706 [2024-10-14 16:53:08.191045] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.706 [2024-10-14 16:53:08.191054] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.706 [2024-10-14 16:53:08.191061] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.706 [2024-10-14 16:53:08.194182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.706 [2024-10-14 16:53:08.203668] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.706 [2024-10-14 16:53:08.204122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.706 [2024-10-14 16:53:08.204140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.706 [2024-10-14 16:53:08.204148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.706 [2024-10-14 16:53:08.204342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.706 [2024-10-14 16:53:08.204538] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.706 [2024-10-14 16:53:08.204547] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.706 [2024-10-14 16:53:08.204554] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.706 [2024-10-14 16:53:08.207674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.706 [2024-10-14 16:53:08.216834] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.706 [2024-10-14 16:53:08.217277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.706 [2024-10-14 16:53:08.217293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.707 [2024-10-14 16:53:08.217301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.707 [2024-10-14 16:53:08.217483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.707 [2024-10-14 16:53:08.217674] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.707 [2024-10-14 16:53:08.217683] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.707 [2024-10-14 16:53:08.217694] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.707 [2024-10-14 16:53:08.220611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.707 [2024-10-14 16:53:08.230032] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.707 [2024-10-14 16:53:08.230473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.707 [2024-10-14 16:53:08.230490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.707 [2024-10-14 16:53:08.230498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.707 [2024-10-14 16:53:08.230687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.707 [2024-10-14 16:53:08.230871] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.707 [2024-10-14 16:53:08.230880] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.707 [2024-10-14 16:53:08.230887] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.707 [2024-10-14 16:53:08.233811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.707 [2024-10-14 16:53:08.243400] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.707 [2024-10-14 16:53:08.243848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.707 [2024-10-14 16:53:08.243865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.707 [2024-10-14 16:53:08.243873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.707 [2024-10-14 16:53:08.244055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.707 [2024-10-14 16:53:08.244238] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.707 [2024-10-14 16:53:08.244247] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.707 [2024-10-14 16:53:08.244253] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.707 [2024-10-14 16:53:08.247166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.707 [2024-10-14 16:53:08.256360] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.707 [2024-10-14 16:53:08.256783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.707 [2024-10-14 16:53:08.256800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.707 [2024-10-14 16:53:08.256807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.707 [2024-10-14 16:53:08.256979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.707 [2024-10-14 16:53:08.257151] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.707 [2024-10-14 16:53:08.257159] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.707 [2024-10-14 16:53:08.257165] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.707 [2024-10-14 16:53:08.259908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.707 [2024-10-14 16:53:08.269430] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.707 [2024-10-14 16:53:08.269804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.707 [2024-10-14 16:53:08.269820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.707 [2024-10-14 16:53:08.269827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.707 [2024-10-14 16:53:08.269999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.707 [2024-10-14 16:53:08.270171] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.707 [2024-10-14 16:53:08.270180] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.707 [2024-10-14 16:53:08.270186] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.707 [2024-10-14 16:53:08.273006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.707 [2024-10-14 16:53:08.282611] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.707 [2024-10-14 16:53:08.282959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.707 [2024-10-14 16:53:08.282975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.707 [2024-10-14 16:53:08.282983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.707 [2024-10-14 16:53:08.283165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.707 [2024-10-14 16:53:08.283352] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.707 [2024-10-14 16:53:08.283361] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.707 [2024-10-14 16:53:08.283367] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.707 [2024-10-14 16:53:08.286286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.707 [2024-10-14 16:53:08.295854] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.707 [2024-10-14 16:53:08.296287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.707 [2024-10-14 16:53:08.296303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.707 [2024-10-14 16:53:08.296311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.707 [2024-10-14 16:53:08.296493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.707 [2024-10-14 16:53:08.296683] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.707 [2024-10-14 16:53:08.296693] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.707 [2024-10-14 16:53:08.296699] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.707 [2024-10-14 16:53:08.299608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.707 [2024-10-14 16:53:08.309147] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.707 [2024-10-14 16:53:08.309507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.707 [2024-10-14 16:53:08.309523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.707 [2024-10-14 16:53:08.309532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.707 [2024-10-14 16:53:08.309732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.707 [2024-10-14 16:53:08.309931] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.707 [2024-10-14 16:53:08.309940] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.707 [2024-10-14 16:53:08.309946] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.707 [2024-10-14 16:53:08.312986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.707 [2024-10-14 16:53:08.322159] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.707 [2024-10-14 16:53:08.322568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.707 [2024-10-14 16:53:08.322584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.707 [2024-10-14 16:53:08.322592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.707 [2024-10-14 16:53:08.322768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.707 [2024-10-14 16:53:08.322940] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.707 [2024-10-14 16:53:08.322949] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.707 [2024-10-14 16:53:08.322955] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.707 [2024-10-14 16:53:08.325699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.707 [2024-10-14 16:53:08.335392] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.707 [2024-10-14 16:53:08.335897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.707 [2024-10-14 16:53:08.335914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.707 [2024-10-14 16:53:08.335922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.707 [2024-10-14 16:53:08.336103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.707 [2024-10-14 16:53:08.336286] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.707 [2024-10-14 16:53:08.336294] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.707 [2024-10-14 16:53:08.336302] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.707 [2024-10-14 16:53:08.339220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.967 [2024-10-14 16:53:08.348739] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.967 [2024-10-14 16:53:08.349184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-10-14 16:53:08.349201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.967 [2024-10-14 16:53:08.349209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.967 [2024-10-14 16:53:08.349404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.967 [2024-10-14 16:53:08.349599] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.967 [2024-10-14 16:53:08.349617] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.967 [2024-10-14 16:53:08.349624] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.967 [2024-10-14 16:53:08.352626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.967 [2024-10-14 16:53:08.361750] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.967 [2024-10-14 16:53:08.362169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.967 [2024-10-14 16:53:08.362184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.362192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.362362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.362534] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.968 [2024-10-14 16:53:08.362542] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.968 [2024-10-14 16:53:08.362548] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.968 [2024-10-14 16:53:08.365384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.968 [2024-10-14 16:53:08.374895] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.968 [2024-10-14 16:53:08.375360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.968 [2024-10-14 16:53:08.375377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.375385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.375579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.375782] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.968 [2024-10-14 16:53:08.375791] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.968 [2024-10-14 16:53:08.375799] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.968 [2024-10-14 16:53:08.378869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.968 [2024-10-14 16:53:08.388052] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.968 [2024-10-14 16:53:08.388492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.968 [2024-10-14 16:53:08.388510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.388517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.388705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.388889] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.968 [2024-10-14 16:53:08.388898] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.968 [2024-10-14 16:53:08.388904] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.968 [2024-10-14 16:53:08.391819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.968 [2024-10-14 16:53:08.401267] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.968 [2024-10-14 16:53:08.401705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.968 [2024-10-14 16:53:08.401725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.401733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.401917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.402099] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.968 [2024-10-14 16:53:08.402108] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.968 [2024-10-14 16:53:08.402115] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.968 [2024-10-14 16:53:08.405120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.968 [2024-10-14 16:53:08.414292] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.968 [2024-10-14 16:53:08.414726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.968 [2024-10-14 16:53:08.414743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.414751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.414922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.415095] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.968 [2024-10-14 16:53:08.415103] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.968 [2024-10-14 16:53:08.415109] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.968 [2024-10-14 16:53:08.417848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.968 [2024-10-14 16:53:08.427444] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.968 [2024-10-14 16:53:08.427888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.968 [2024-10-14 16:53:08.427905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.427913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.428095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.428279] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.968 [2024-10-14 16:53:08.428288] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.968 [2024-10-14 16:53:08.428294] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.968 [2024-10-14 16:53:08.431206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.968 [2024-10-14 16:53:08.440694] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.968 [2024-10-14 16:53:08.441121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.968 [2024-10-14 16:53:08.441137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.441145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.441327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.441516] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.968 [2024-10-14 16:53:08.441525] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.968 [2024-10-14 16:53:08.441533] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.968 [2024-10-14 16:53:08.444324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.968 [2024-10-14 16:53:08.453694] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.968 [2024-10-14 16:53:08.454136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.968 [2024-10-14 16:53:08.454179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.454203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.454793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.455001] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.968 [2024-10-14 16:53:08.455009] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.968 [2024-10-14 16:53:08.455015] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.968 [2024-10-14 16:53:08.457752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.968 [2024-10-14 16:53:08.466641] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.968 [2024-10-14 16:53:08.467041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.968 [2024-10-14 16:53:08.467058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.467065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.467237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.467411] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.968 [2024-10-14 16:53:08.467419] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.968 [2024-10-14 16:53:08.467426] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.968 [2024-10-14 16:53:08.470135] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.968 [2024-10-14 16:53:08.479538] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.968 [2024-10-14 16:53:08.479822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.968 [2024-10-14 16:53:08.479838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.479846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.480012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.480177] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.968 [2024-10-14 16:53:08.480186] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.968 [2024-10-14 16:53:08.480192] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.968 [2024-10-14 16:53:08.482822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.968 [2024-10-14 16:53:08.492480] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.968 [2024-10-14 16:53:08.492728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.968 [2024-10-14 16:53:08.492744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.968 [2024-10-14 16:53:08.492751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.968 [2024-10-14 16:53:08.492917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.968 [2024-10-14 16:53:08.493083] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.969 [2024-10-14 16:53:08.493091] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.969 [2024-10-14 16:53:08.493097] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.969 [2024-10-14 16:53:08.495776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.969 [2024-10-14 16:53:08.505396] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.969 [2024-10-14 16:53:08.505760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.969 [2024-10-14 16:53:08.505776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.969 [2024-10-14 16:53:08.505784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.969 [2024-10-14 16:53:08.505950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.969 [2024-10-14 16:53:08.506117] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.969 [2024-10-14 16:53:08.506125] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.969 [2024-10-14 16:53:08.506131] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.969 [2024-10-14 16:53:08.508834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.969 [2024-10-14 16:53:08.518231] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.969 [2024-10-14 16:53:08.518623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.969 [2024-10-14 16:53:08.518639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.969 [2024-10-14 16:53:08.518646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.969 [2024-10-14 16:53:08.518829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.969 [2024-10-14 16:53:08.518996] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.969 [2024-10-14 16:53:08.519004] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.969 [2024-10-14 16:53:08.519010] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.969 [2024-10-14 16:53:08.521670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.969 [2024-10-14 16:53:08.531057] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.969 [2024-10-14 16:53:08.531412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.969 [2024-10-14 16:53:08.531428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.969 [2024-10-14 16:53:08.531438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.969 [2024-10-14 16:53:08.531611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.969 [2024-10-14 16:53:08.531779] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.969 [2024-10-14 16:53:08.531786] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.969 [2024-10-14 16:53:08.531792] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.969 [2024-10-14 16:53:08.534391] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.969 [2024-10-14 16:53:08.543811] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.969 [2024-10-14 16:53:08.544228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.969 [2024-10-14 16:53:08.544243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.969 [2024-10-14 16:53:08.544251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.969 [2024-10-14 16:53:08.544409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.969 [2024-10-14 16:53:08.544566] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.969 [2024-10-14 16:53:08.544574] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.969 [2024-10-14 16:53:08.544579] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.969 [2024-10-14 16:53:08.547194] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.969 [2024-10-14 16:53:08.556575] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.969 [2024-10-14 16:53:08.557016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.969 [2024-10-14 16:53:08.557031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.969 [2024-10-14 16:53:08.557038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.969 [2024-10-14 16:53:08.557204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.969 [2024-10-14 16:53:08.557371] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.969 [2024-10-14 16:53:08.557379] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.969 [2024-10-14 16:53:08.557384] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.969 [2024-10-14 16:53:08.559989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.969 [2024-10-14 16:53:08.569331] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.969 [2024-10-14 16:53:08.569713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.969 [2024-10-14 16:53:08.569729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.969 [2024-10-14 16:53:08.569736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.969 [2024-10-14 16:53:08.569893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.969 [2024-10-14 16:53:08.570050] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.969 [2024-10-14 16:53:08.570061] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.969 [2024-10-14 16:53:08.570066] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.969 [2024-10-14 16:53:08.572678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.969 [2024-10-14 16:53:08.582063] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.969 [2024-10-14 16:53:08.582503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.969 [2024-10-14 16:53:08.582545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.969 [2024-10-14 16:53:08.582568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.969 [2024-10-14 16:53:08.583161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.969 [2024-10-14 16:53:08.583364] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.969 [2024-10-14 16:53:08.583372] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.969 [2024-10-14 16:53:08.583378] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.969 [2024-10-14 16:53:08.585983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.969 [2024-10-14 16:53:08.594860] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.969 [2024-10-14 16:53:08.595211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.969 [2024-10-14 16:53:08.595226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:03.969 [2024-10-14 16:53:08.595234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:03.969 [2024-10-14 16:53:08.595400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:03.969 [2024-10-14 16:53:08.595566] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.969 [2024-10-14 16:53:08.595575] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.969 [2024-10-14 16:53:08.595580] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.969 7411.75 IOPS, 28.95 MiB/s [2024-10-14T14:53:08.603Z] [2024-10-14 16:53:08.599464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.229 [2024-10-14 16:53:08.607651] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.229 [2024-10-14 16:53:08.608072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.229 [2024-10-14 16:53:08.608088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.229 [2024-10-14 16:53:08.608095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.229 [2024-10-14 16:53:08.608262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.229 [2024-10-14 16:53:08.608428] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.229 [2024-10-14 16:53:08.608436] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.229 [2024-10-14 16:53:08.608443] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.229 [2024-10-14 16:53:08.611093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.229 [2024-10-14 16:53:08.620441] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.229 [2024-10-14 16:53:08.620871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.229 [2024-10-14 16:53:08.620888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.229 [2024-10-14 16:53:08.620895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.229 [2024-10-14 16:53:08.621061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.229 [2024-10-14 16:53:08.621227] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.230 [2024-10-14 16:53:08.621235] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.230 [2024-10-14 16:53:08.621241] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.230 [2024-10-14 16:53:08.623857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.230 [2024-10-14 16:53:08.633189] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.230 [2024-10-14 16:53:08.633605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.230 [2024-10-14 16:53:08.633621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.230 [2024-10-14 16:53:08.633628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.230 [2024-10-14 16:53:08.633786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.230 [2024-10-14 16:53:08.633943] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.230 [2024-10-14 16:53:08.633950] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.230 [2024-10-14 16:53:08.633956] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.230 [2024-10-14 16:53:08.636533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.230 [2024-10-14 16:53:08.645927] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.230 [2024-10-14 16:53:08.646259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.230 [2024-10-14 16:53:08.646274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.230 [2024-10-14 16:53:08.646281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.230 [2024-10-14 16:53:08.646438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.230 [2024-10-14 16:53:08.646596] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.230 [2024-10-14 16:53:08.646609] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.230 [2024-10-14 16:53:08.646615] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.230 [2024-10-14 16:53:08.649217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.230 [2024-10-14 16:53:08.658715] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.230 [2024-10-14 16:53:08.659124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.230 [2024-10-14 16:53:08.659139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.230 [2024-10-14 16:53:08.659146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.230 [2024-10-14 16:53:08.659308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.230 [2024-10-14 16:53:08.659466] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.230 [2024-10-14 16:53:08.659474] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.230 [2024-10-14 16:53:08.659479] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.230 [2024-10-14 16:53:08.662093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.230 [2024-10-14 16:53:08.671501] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.230 [2024-10-14 16:53:08.671931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.230 [2024-10-14 16:53:08.671946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.230 [2024-10-14 16:53:08.671953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.230 [2024-10-14 16:53:08.672120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.230 [2024-10-14 16:53:08.672287] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.230 [2024-10-14 16:53:08.672294] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.230 [2024-10-14 16:53:08.672300] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.230 [2024-10-14 16:53:08.674912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.230 [2024-10-14 16:53:08.684244] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.230 [2024-10-14 16:53:08.684664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.230 [2024-10-14 16:53:08.684681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.230 [2024-10-14 16:53:08.684688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.230 [2024-10-14 16:53:08.684860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.230 [2024-10-14 16:53:08.685036] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.230 [2024-10-14 16:53:08.685045] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.230 [2024-10-14 16:53:08.685052] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.230 [2024-10-14 16:53:08.687830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.230 [2024-10-14 16:53:08.697278] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.230 [2024-10-14 16:53:08.697631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.230 [2024-10-14 16:53:08.697647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.230 [2024-10-14 16:53:08.697655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.230 [2024-10-14 16:53:08.697827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.230 [2024-10-14 16:53:08.697999] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.230 [2024-10-14 16:53:08.698008] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.230 [2024-10-14 16:53:08.698017] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.230 [2024-10-14 16:53:08.700715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.230 [2024-10-14 16:53:08.710196] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.230 [2024-10-14 16:53:08.710593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.230 [2024-10-14 16:53:08.710612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.230 [2024-10-14 16:53:08.710620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.230 [2024-10-14 16:53:08.710787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.230 [2024-10-14 16:53:08.710953] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.230 [2024-10-14 16:53:08.710961] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.230 [2024-10-14 16:53:08.710967] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.230 [2024-10-14 16:53:08.713673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.230 [2024-10-14 16:53:08.723079] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.230 [2024-10-14 16:53:08.723498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.230 [2024-10-14 16:53:08.723514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.230 [2024-10-14 16:53:08.723522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.230 [2024-10-14 16:53:08.723697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.230 [2024-10-14 16:53:08.723870] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.230 [2024-10-14 16:53:08.723878] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.230 [2024-10-14 16:53:08.723884] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.230 [2024-10-14 16:53:08.726545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.230 [2024-10-14 16:53:08.735938] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.230 [2024-10-14 16:53:08.736347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.230 [2024-10-14 16:53:08.736389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.230 [2024-10-14 16:53:08.736413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.230 [2024-10-14 16:53:08.736842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.230 [2024-10-14 16:53:08.737010] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.230 [2024-10-14 16:53:08.737018] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.230 [2024-10-14 16:53:08.737024] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.230 [2024-10-14 16:53:08.739687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.230 [2024-10-14 16:53:08.748682] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.230 [2024-10-14 16:53:08.749083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.230 [2024-10-14 16:53:08.749098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.231 [2024-10-14 16:53:08.749105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.231 [2024-10-14 16:53:08.749263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.231 [2024-10-14 16:53:08.749421] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.231 [2024-10-14 16:53:08.749428] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.231 [2024-10-14 16:53:08.749434] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.231 [2024-10-14 16:53:08.752044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.231 [2024-10-14 16:53:08.761432] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.231 [2024-10-14 16:53:08.761745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.231 [2024-10-14 16:53:08.761760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.231 [2024-10-14 16:53:08.761767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.231 [2024-10-14 16:53:08.761933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.231 [2024-10-14 16:53:08.762100] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.231 [2024-10-14 16:53:08.762108] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.231 [2024-10-14 16:53:08.762114] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.231 [2024-10-14 16:53:08.764733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.231 [2024-10-14 16:53:08.774242] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.231 [2024-10-14 16:53:08.774668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.231 [2024-10-14 16:53:08.774712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.231 [2024-10-14 16:53:08.774735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.231 [2024-10-14 16:53:08.775313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.231 [2024-10-14 16:53:08.775909] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.231 [2024-10-14 16:53:08.775935] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.231 [2024-10-14 16:53:08.775955] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.231 [2024-10-14 16:53:08.778571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.231 [2024-10-14 16:53:08.787065] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.231 [2024-10-14 16:53:08.787490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.231 [2024-10-14 16:53:08.787538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.231 [2024-10-14 16:53:08.787562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.231 [2024-10-14 16:53:08.787999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.231 [2024-10-14 16:53:08.788170] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.231 [2024-10-14 16:53:08.788179] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.231 [2024-10-14 16:53:08.788185] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.231 [2024-10-14 16:53:08.790782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.231 [2024-10-14 16:53:08.799874] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.231 [2024-10-14 16:53:08.800263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.231 [2024-10-14 16:53:08.800279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.231 [2024-10-14 16:53:08.800286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.231 [2024-10-14 16:53:08.800444] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.231 [2024-10-14 16:53:08.800607] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.231 [2024-10-14 16:53:08.800616] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.231 [2024-10-14 16:53:08.800622] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.231 [2024-10-14 16:53:08.803233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.231 [2024-10-14 16:53:08.812622] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.231 [2024-10-14 16:53:08.813055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.231 [2024-10-14 16:53:08.813086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.231 [2024-10-14 16:53:08.813109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.231 [2024-10-14 16:53:08.813653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.231 [2024-10-14 16:53:08.813821] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.231 [2024-10-14 16:53:08.813829] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.231 [2024-10-14 16:53:08.813835] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.231 [2024-10-14 16:53:08.816429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.231 [2024-10-14 16:53:08.825320] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.231 [2024-10-14 16:53:08.825744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.231 [2024-10-14 16:53:08.825789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.231 [2024-10-14 16:53:08.825812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.231 [2024-10-14 16:53:08.826390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.231 [2024-10-14 16:53:08.826925] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.231 [2024-10-14 16:53:08.826933] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.231 [2024-10-14 16:53:08.826939] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.231 [2024-10-14 16:53:08.829537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.231 [2024-10-14 16:53:08.838144] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.231 [2024-10-14 16:53:08.838479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.231 [2024-10-14 16:53:08.838494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.231 [2024-10-14 16:53:08.838501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.231 [2024-10-14 16:53:08.838682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.231 [2024-10-14 16:53:08.838849] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.231 [2024-10-14 16:53:08.838857] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.231 [2024-10-14 16:53:08.838863] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.231 [2024-10-14 16:53:08.841504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.231 [2024-10-14 16:53:08.850842] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.231 [2024-10-14 16:53:08.851259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.231 [2024-10-14 16:53:08.851273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.231 [2024-10-14 16:53:08.851280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.231 [2024-10-14 16:53:08.851438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.231 [2024-10-14 16:53:08.851595] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.231 [2024-10-14 16:53:08.851608] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.231 [2024-10-14 16:53:08.851614] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.231 [2024-10-14 16:53:08.854215] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.231 [2024-10-14 16:53:08.863808] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.231 [2024-10-14 16:53:08.864120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.231 [2024-10-14 16:53:08.864135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.231 [2024-10-14 16:53:08.864142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.492 [2024-10-14 16:53:08.864309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.492 [2024-10-14 16:53:08.864476] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.492 [2024-10-14 16:53:08.864486] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.492 [2024-10-14 16:53:08.864492] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.492 [2024-10-14 16:53:08.867168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.492 [2024-10-14 16:53:08.876535] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.492 [2024-10-14 16:53:08.876946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.492 [2024-10-14 16:53:08.876965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.492 [2024-10-14 16:53:08.876973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.492 [2024-10-14 16:53:08.877139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.492 [2024-10-14 16:53:08.877306] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.492 [2024-10-14 16:53:08.877313] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.492 [2024-10-14 16:53:08.877319] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.492 [2024-10-14 16:53:08.879931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.492 [2024-10-14 16:53:08.889352] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.492 [2024-10-14 16:53:08.889679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.492 [2024-10-14 16:53:08.889695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.492 [2024-10-14 16:53:08.889702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.492 [2024-10-14 16:53:08.889860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.492 [2024-10-14 16:53:08.890016] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.492 [2024-10-14 16:53:08.890024] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.492 [2024-10-14 16:53:08.890030] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.492 [2024-10-14 16:53:08.892677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.492 [2024-10-14 16:53:08.902115] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.492 [2024-10-14 16:53:08.902522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.492 [2024-10-14 16:53:08.902537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.492 [2024-10-14 16:53:08.902544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.492 [2024-10-14 16:53:08.902727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.492 [2024-10-14 16:53:08.902894] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.492 [2024-10-14 16:53:08.902902] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.492 [2024-10-14 16:53:08.902907] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.492 [2024-10-14 16:53:08.905502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.492 [2024-10-14 16:53:08.914846] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.492 [2024-10-14 16:53:08.915255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.492 [2024-10-14 16:53:08.915269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.492 [2024-10-14 16:53:08.915276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.492 [2024-10-14 16:53:08.915434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.492 [2024-10-14 16:53:08.915595] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.492 [2024-10-14 16:53:08.915608] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.492 [2024-10-14 16:53:08.915614] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.492 [2024-10-14 16:53:08.918216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.492 [2024-10-14 16:53:08.927685] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.492 [2024-10-14 16:53:08.928098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.492 [2024-10-14 16:53:08.928143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.492 [2024-10-14 16:53:08.928166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.492 [2024-10-14 16:53:08.928758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.492 [2024-10-14 16:53:08.929307] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.492 [2024-10-14 16:53:08.929315] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.492 [2024-10-14 16:53:08.929321] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.492 [2024-10-14 16:53:08.931900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.492 [2024-10-14 16:53:08.940413] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.492 [2024-10-14 16:53:08.940752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.492 [2024-10-14 16:53:08.940768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.492 [2024-10-14 16:53:08.940776] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.492 [2024-10-14 16:53:08.940943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.492 [2024-10-14 16:53:08.941109] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.492 [2024-10-14 16:53:08.941117] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.492 [2024-10-14 16:53:08.941123] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.492 [2024-10-14 16:53:08.943861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.492 [2024-10-14 16:53:08.953479] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.492 [2024-10-14 16:53:08.953911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.492 [2024-10-14 16:53:08.953926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.492 [2024-10-14 16:53:08.953934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.492 [2024-10-14 16:53:08.954106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.492 [2024-10-14 16:53:08.954277] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.492 [2024-10-14 16:53:08.954285] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.492 [2024-10-14 16:53:08.954291] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.492 [2024-10-14 16:53:08.957028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.492 [2024-10-14 16:53:08.966450] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.492 [2024-10-14 16:53:08.966883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.492 [2024-10-14 16:53:08.966898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.492 [2024-10-14 16:53:08.966905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.492 [2024-10-14 16:53:08.967076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.492 [2024-10-14 16:53:08.967252] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.492 [2024-10-14 16:53:08.967260] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.492 [2024-10-14 16:53:08.967267] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.492 [2024-10-14 16:53:08.969999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.492 [2024-10-14 16:53:08.979367] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.492 [2024-10-14 16:53:08.979773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.492 [2024-10-14 16:53:08.979789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.492 [2024-10-14 16:53:08.979797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.492 [2024-10-14 16:53:08.979965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.492 [2024-10-14 16:53:08.980131] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.492 [2024-10-14 16:53:08.980139] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.492 [2024-10-14 16:53:08.980145] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.492 [2024-10-14 16:53:08.982764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.492 [2024-10-14 16:53:08.992197] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.493 [2024-10-14 16:53:08.992590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.493 [2024-10-14 16:53:08.992610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.493 [2024-10-14 16:53:08.992618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.493 [2024-10-14 16:53:08.992789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.493 [2024-10-14 16:53:08.992961] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.493 [2024-10-14 16:53:08.992969] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.493 [2024-10-14 16:53:08.992976] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.493 [2024-10-14 16:53:08.995642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.493 [2024-10-14 16:53:09.005027] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.493 [2024-10-14 16:53:09.005453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.493 [2024-10-14 16:53:09.005495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.493 [2024-10-14 16:53:09.005526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.493 [2024-10-14 16:53:09.006127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.493 [2024-10-14 16:53:09.006518] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.493 [2024-10-14 16:53:09.006527] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.493 [2024-10-14 16:53:09.006533] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.493 [2024-10-14 16:53:09.009129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.493 [2024-10-14 16:53:09.017848] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.493 [2024-10-14 16:53:09.018261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.493 [2024-10-14 16:53:09.018277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.493 [2024-10-14 16:53:09.018284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.493 [2024-10-14 16:53:09.018441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.493 [2024-10-14 16:53:09.018598] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.493 [2024-10-14 16:53:09.018612] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.493 [2024-10-14 16:53:09.018618] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.493 [2024-10-14 16:53:09.021294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.493 [2024-10-14 16:53:09.030675] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.493 [2024-10-14 16:53:09.031064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.493 [2024-10-14 16:53:09.031080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.493 [2024-10-14 16:53:09.031087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.493 [2024-10-14 16:53:09.031245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.493 [2024-10-14 16:53:09.031402] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.493 [2024-10-14 16:53:09.031410] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.493 [2024-10-14 16:53:09.031415] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.493 [2024-10-14 16:53:09.034025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.493 [2024-10-14 16:53:09.043418] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.493 [2024-10-14 16:53:09.043661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.493 [2024-10-14 16:53:09.043678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.493 [2024-10-14 16:53:09.043685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.493 [2024-10-14 16:53:09.043852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.493 [2024-10-14 16:53:09.044018] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.493 [2024-10-14 16:53:09.044029] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.493 [2024-10-14 16:53:09.044035] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.493 [2024-10-14 16:53:09.046663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.493 [2024-10-14 16:53:09.056200] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.493 [2024-10-14 16:53:09.056611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.493 [2024-10-14 16:53:09.056642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.493 [2024-10-14 16:53:09.056650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.493 [2024-10-14 16:53:09.056816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.493 [2024-10-14 16:53:09.056982] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.493 [2024-10-14 16:53:09.056990] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.493 [2024-10-14 16:53:09.056997] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.493 [2024-10-14 16:53:09.059632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.493 [2024-10-14 16:53:09.068988] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.493 [2024-10-14 16:53:09.069323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.493 [2024-10-14 16:53:09.069339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.493 [2024-10-14 16:53:09.069346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.493 [2024-10-14 16:53:09.069513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.493 [2024-10-14 16:53:09.069685] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.493 [2024-10-14 16:53:09.069693] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.493 [2024-10-14 16:53:09.069699] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.493 [2024-10-14 16:53:09.072299] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.493 [2024-10-14 16:53:09.081777] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.493 [2024-10-14 16:53:09.082172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.493 [2024-10-14 16:53:09.082187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.493 [2024-10-14 16:53:09.082195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.493 [2024-10-14 16:53:09.082362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.493 [2024-10-14 16:53:09.082532] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.493 [2024-10-14 16:53:09.082541] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.493 [2024-10-14 16:53:09.082547] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.493 [2024-10-14 16:53:09.085198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.493 [2024-10-14 16:53:09.094713] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.493 [2024-10-14 16:53:09.095150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.493 [2024-10-14 16:53:09.095166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.493 [2024-10-14 16:53:09.095173] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.493 [2024-10-14 16:53:09.095340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.493 [2024-10-14 16:53:09.095507] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.493 [2024-10-14 16:53:09.095516] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.493 [2024-10-14 16:53:09.095522] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.493 [2024-10-14 16:53:09.098127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.493 [2024-10-14 16:53:09.107462] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.493 [2024-10-14 16:53:09.107824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.493 [2024-10-14 16:53:09.107840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.493 [2024-10-14 16:53:09.107848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.494 [2024-10-14 16:53:09.108013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.494 [2024-10-14 16:53:09.108180] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.494 [2024-10-14 16:53:09.108188] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.494 [2024-10-14 16:53:09.108194] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.494 [2024-10-14 16:53:09.110816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.494 [2024-10-14 16:53:09.120261] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.494 [2024-10-14 16:53:09.120680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.494 [2024-10-14 16:53:09.120696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.494 [2024-10-14 16:53:09.120703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.494 [2024-10-14 16:53:09.120877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.494 [2024-10-14 16:53:09.121036] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.494 [2024-10-14 16:53:09.121044] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.494 [2024-10-14 16:53:09.121049] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.494 [2024-10-14 16:53:09.123753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.753 [2024-10-14 16:53:09.133154] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.753 [2024-10-14 16:53:09.133562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.753 [2024-10-14 16:53:09.133577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.753 [2024-10-14 16:53:09.133584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.753 [2024-10-14 16:53:09.133780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.753 [2024-10-14 16:53:09.133953] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.753 [2024-10-14 16:53:09.133961] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.133978] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.136577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.754 [2024-10-14 16:53:09.145990] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.146352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.146367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.146374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.146531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.754 [2024-10-14 16:53:09.146713] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.754 [2024-10-14 16:53:09.146722] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.146728] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.149324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.754 [2024-10-14 16:53:09.158807] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.159208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.159250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.159273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.159794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.754 [2024-10-14 16:53:09.159961] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.754 [2024-10-14 16:53:09.159969] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.159975] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.162568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.754 [2024-10-14 16:53:09.171597] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.172014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.172029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.172036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.172202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.754 [2024-10-14 16:53:09.172368] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.754 [2024-10-14 16:53:09.172376] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.172385] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.174984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.754 [2024-10-14 16:53:09.184319] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.184714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.184730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.184737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.184904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.754 [2024-10-14 16:53:09.185070] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.754 [2024-10-14 16:53:09.185079] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.185085] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.187751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.754 [2024-10-14 16:53:09.197086] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.197478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.197494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.197502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.197692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.754 [2024-10-14 16:53:09.197863] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.754 [2024-10-14 16:53:09.197871] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.197877] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.200649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.754 [2024-10-14 16:53:09.210058] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.210459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.210475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.210482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.210767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.754 [2024-10-14 16:53:09.210941] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.754 [2024-10-14 16:53:09.210950] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.210956] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.213656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.754 [2024-10-14 16:53:09.222946] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.223363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.223379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.223387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.223553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.754 [2024-10-14 16:53:09.223745] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.754 [2024-10-14 16:53:09.223753] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.223759] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.226457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.754 [2024-10-14 16:53:09.235660] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.236059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.236075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.236082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.236248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.754 [2024-10-14 16:53:09.236415] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.754 [2024-10-14 16:53:09.236422] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.236428] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.239027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.754 [2024-10-14 16:53:09.248456] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.248866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.248908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.248931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.249509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.754 [2024-10-14 16:53:09.250018] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.754 [2024-10-14 16:53:09.250027] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.250032] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.252629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.754 [2024-10-14 16:53:09.261212] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.261595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.261614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.261621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.261778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.754 [2024-10-14 16:53:09.261938] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.754 [2024-10-14 16:53:09.261946] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.754 [2024-10-14 16:53:09.261951] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.754 [2024-10-14 16:53:09.264520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.754 [2024-10-14 16:53:09.274031] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.754 [2024-10-14 16:53:09.274451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.754 [2024-10-14 16:53:09.274467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.754 [2024-10-14 16:53:09.274475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.754 [2024-10-14 16:53:09.274647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.755 [2024-10-14 16:53:09.274813] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.755 [2024-10-14 16:53:09.274821] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.755 [2024-10-14 16:53:09.274827] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.755 [2024-10-14 16:53:09.277421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.755 [2024-10-14 16:53:09.286755] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.755 [2024-10-14 16:53:09.287148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.755 [2024-10-14 16:53:09.287164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.755 [2024-10-14 16:53:09.287170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.755 [2024-10-14 16:53:09.287328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.755 [2024-10-14 16:53:09.287486] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.755 [2024-10-14 16:53:09.287493] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.755 [2024-10-14 16:53:09.287499] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.755 [2024-10-14 16:53:09.290107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.755 [2024-10-14 16:53:09.299538] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.755 [2024-10-14 16:53:09.299921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.755 [2024-10-14 16:53:09.299937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.755 [2024-10-14 16:53:09.299944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.755 [2024-10-14 16:53:09.300111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.755 [2024-10-14 16:53:09.300278] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.755 [2024-10-14 16:53:09.300286] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.755 [2024-10-14 16:53:09.300292] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.755 [2024-10-14 16:53:09.302902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.755 [2024-10-14 16:53:09.312273] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.755 [2024-10-14 16:53:09.312684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.755 [2024-10-14 16:53:09.312700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.755 [2024-10-14 16:53:09.312707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.755 [2024-10-14 16:53:09.312874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.755 [2024-10-14 16:53:09.313040] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.755 [2024-10-14 16:53:09.313047] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.755 [2024-10-14 16:53:09.313053] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.755 [2024-10-14 16:53:09.315676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.755 [2024-10-14 16:53:09.325010] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.755 [2024-10-14 16:53:09.325399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.755 [2024-10-14 16:53:09.325414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.755 [2024-10-14 16:53:09.325421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.755 [2024-10-14 16:53:09.325579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.755 [2024-10-14 16:53:09.325764] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.755 [2024-10-14 16:53:09.325773] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.755 [2024-10-14 16:53:09.325779] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.755 [2024-10-14 16:53:09.328374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.755 [2024-10-14 16:53:09.337708] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.755 [2024-10-14 16:53:09.338092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.755 [2024-10-14 16:53:09.338108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.755 [2024-10-14 16:53:09.338115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.755 [2024-10-14 16:53:09.338281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.755 [2024-10-14 16:53:09.338448] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.755 [2024-10-14 16:53:09.338456] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.755 [2024-10-14 16:53:09.338462] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.755 [2024-10-14 16:53:09.341147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.755 [2024-10-14 16:53:09.350463] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.755 [2024-10-14 16:53:09.350890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.755 [2024-10-14 16:53:09.350929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.755 [2024-10-14 16:53:09.350961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.755 [2024-10-14 16:53:09.351539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.755 [2024-10-14 16:53:09.352137] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.755 [2024-10-14 16:53:09.352165] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.755 [2024-10-14 16:53:09.352185] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.755 [2024-10-14 16:53:09.354807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:04.755 [2024-10-14 16:53:09.363232] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.755 [2024-10-14 16:53:09.363669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.755 [2024-10-14 16:53:09.363712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.755 [2024-10-14 16:53:09.363735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.755 [2024-10-14 16:53:09.364258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.755 [2024-10-14 16:53:09.364425] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.755 [2024-10-14 16:53:09.364433] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.755 [2024-10-14 16:53:09.364439] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.755 [2024-10-14 16:53:09.367053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:04.755 [2024-10-14 16:53:09.376011] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:04.755 [2024-10-14 16:53:09.376403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.755 [2024-10-14 16:53:09.376418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:04.755 [2024-10-14 16:53:09.376425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:04.755 [2024-10-14 16:53:09.376581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:04.755 [2024-10-14 16:53:09.376768] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:04.755 [2024-10-14 16:53:09.376776] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:04.755 [2024-10-14 16:53:09.376782] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:04.755 [2024-10-14 16:53:09.379373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.015 [2024-10-14 16:53:09.389020] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.015 [2024-10-14 16:53:09.389413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.015 [2024-10-14 16:53:09.389428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.015 [2024-10-14 16:53:09.389436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.015 [2024-10-14 16:53:09.389614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.015 [2024-10-14 16:53:09.389788] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.015 [2024-10-14 16:53:09.389797] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.015 [2024-10-14 16:53:09.389803] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.015 [2024-10-14 16:53:09.392429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.015 [2024-10-14 16:53:09.401816] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.015 [2024-10-14 16:53:09.402125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.015 [2024-10-14 16:53:09.402140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.015 [2024-10-14 16:53:09.402147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.015 [2024-10-14 16:53:09.402305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.015 [2024-10-14 16:53:09.402461] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.015 [2024-10-14 16:53:09.402469] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.015 [2024-10-14 16:53:09.402474] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.015 [2024-10-14 16:53:09.405082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.015 [2024-10-14 16:53:09.414598] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.015 [2024-10-14 16:53:09.415005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.015 [2024-10-14 16:53:09.415021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.015 [2024-10-14 16:53:09.415028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.015 [2024-10-14 16:53:09.415195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.015 [2024-10-14 16:53:09.415361] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.015 [2024-10-14 16:53:09.415369] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.015 [2024-10-14 16:53:09.415375] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.015 [2024-10-14 16:53:09.417982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.015 [2024-10-14 16:53:09.427385] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.015 [2024-10-14 16:53:09.427773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.015 [2024-10-14 16:53:09.427789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.015 [2024-10-14 16:53:09.427796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.015 [2024-10-14 16:53:09.427954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.015 [2024-10-14 16:53:09.428111] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.015 [2024-10-14 16:53:09.428119] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.015 [2024-10-14 16:53:09.428124] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.015 [2024-10-14 16:53:09.430729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.015 [2024-10-14 16:53:09.440433] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.015 [2024-10-14 16:53:09.440846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.015 [2024-10-14 16:53:09.440862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.015 [2024-10-14 16:53:09.440870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.015 [2024-10-14 16:53:09.441042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.015 [2024-10-14 16:53:09.441216] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.015 [2024-10-14 16:53:09.441225] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.015 [2024-10-14 16:53:09.441231] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.015 [2024-10-14 16:53:09.443964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.015 [2024-10-14 16:53:09.453465] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.015 [2024-10-14 16:53:09.453865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.015 [2024-10-14 16:53:09.453881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.015 [2024-10-14 16:53:09.453889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.015 [2024-10-14 16:53:09.454060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.015 [2024-10-14 16:53:09.454231] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.015 [2024-10-14 16:53:09.454240] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.015 [2024-10-14 16:53:09.454246] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.015 [2024-10-14 16:53:09.456984] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.015 [2024-10-14 16:53:09.466507] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.015 [2024-10-14 16:53:09.466854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.015 [2024-10-14 16:53:09.466870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.015 [2024-10-14 16:53:09.466877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.015 [2024-10-14 16:53:09.467050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.015 [2024-10-14 16:53:09.467221] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.015 [2024-10-14 16:53:09.467229] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.015 [2024-10-14 16:53:09.467235] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.015 [2024-10-14 16:53:09.469972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.015 [2024-10-14 16:53:09.479605] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.015 [2024-10-14 16:53:09.479953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.015 [2024-10-14 16:53:09.479969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.015 [2024-10-14 16:53:09.479980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.015 [2024-10-14 16:53:09.480152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.015 [2024-10-14 16:53:09.480325] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.015 [2024-10-14 16:53:09.480334] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.015 [2024-10-14 16:53:09.480343] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.015 [2024-10-14 16:53:09.483062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.015 [2024-10-14 16:53:09.492569] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.015 [2024-10-14 16:53:09.492977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.015 [2024-10-14 16:53:09.492994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.015 [2024-10-14 16:53:09.493001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.015 [2024-10-14 16:53:09.493173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.015 [2024-10-14 16:53:09.493345] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.015 [2024-10-14 16:53:09.493353] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.015 [2024-10-14 16:53:09.493359] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.015 [2024-10-14 16:53:09.496115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.015 [2024-10-14 16:53:09.505530] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.015 [2024-10-14 16:53:09.505825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.015 [2024-10-14 16:53:09.505841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.015 [2024-10-14 16:53:09.505849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.015 [2024-10-14 16:53:09.506020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.015 [2024-10-14 16:53:09.506191] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.506200] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.506206] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 [2024-10-14 16:53:09.508851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.016 [2024-10-14 16:53:09.518367] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.518761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.518777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.518784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.016 [2024-10-14 16:53:09.518951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.016 [2024-10-14 16:53:09.519120] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.519132] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.519138] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 [2024-10-14 16:53:09.521824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.016 [2024-10-14 16:53:09.531218] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.531788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.531806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.531813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.016 [2024-10-14 16:53:09.531981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.016 [2024-10-14 16:53:09.532149] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.532157] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.532162] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 [2024-10-14 16:53:09.534823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.016 [2024-10-14 16:53:09.544233] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.544699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.544743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.544767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.016 [2024-10-14 16:53:09.545346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.016 [2024-10-14 16:53:09.545860] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.545868] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.545874] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 [2024-10-14 16:53:09.548464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.016 [2024-10-14 16:53:09.557012] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.557431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.557446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.557454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.016 [2024-10-14 16:53:09.557626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.016 [2024-10-14 16:53:09.557793] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.557801] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.557807] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 [2024-10-14 16:53:09.560404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.016 [2024-10-14 16:53:09.569946] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.570298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.570341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.570364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.016 [2024-10-14 16:53:09.570854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.016 [2024-10-14 16:53:09.571022] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.571030] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.571036] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 [2024-10-14 16:53:09.576588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.016 [2024-10-14 16:53:09.585074] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.585611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.585656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.585679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.016 [2024-10-14 16:53:09.586257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.016 [2024-10-14 16:53:09.586821] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.586833] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.586842] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 [2024-10-14 16:53:09.590900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.016 [2024-10-14 16:53:09.597990] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.598394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.598409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.598417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.016 [2024-10-14 16:53:09.598583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.016 [2024-10-14 16:53:09.598761] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.598770] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.598776] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 5929.40 IOPS, 23.16 MiB/s [2024-10-14T14:53:09.650Z] [2024-10-14 16:53:09.602618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.016 [2024-10-14 16:53:09.610880] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.611169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.611185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.611192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.016 [2024-10-14 16:53:09.611361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.016 [2024-10-14 16:53:09.611529] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.611537] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.611543] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 [2024-10-14 16:53:09.614148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.016 [2024-10-14 16:53:09.623702] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.623999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.624016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.624023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.016 [2024-10-14 16:53:09.624188] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.016 [2024-10-14 16:53:09.624356] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.624364] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.624369] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 [2024-10-14 16:53:09.626986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.016 [2024-10-14 16:53:09.636577] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.636961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.637004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.637027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.016 [2024-10-14 16:53:09.637618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.016 [2024-10-14 16:53:09.637900] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.016 [2024-10-14 16:53:09.637909] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.016 [2024-10-14 16:53:09.637915] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.016 [2024-10-14 16:53:09.640589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.016 [2024-10-14 16:53:09.649524] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.016 [2024-10-14 16:53:09.649965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.016 [2024-10-14 16:53:09.649982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.016 [2024-10-14 16:53:09.649989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.276 [2024-10-14 16:53:09.650160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.276 [2024-10-14 16:53:09.650334] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.276 [2024-10-14 16:53:09.650343] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.276 [2024-10-14 16:53:09.650352] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.276 [2024-10-14 16:53:09.653025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.276 [2024-10-14 16:53:09.662462] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.276 [2024-10-14 16:53:09.662839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.276 [2024-10-14 16:53:09.662866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.276 [2024-10-14 16:53:09.662874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.276 [2024-10-14 16:53:09.663041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.276 [2024-10-14 16:53:09.663212] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.276 [2024-10-14 16:53:09.663221] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.276 [2024-10-14 16:53:09.663227] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.276 [2024-10-14 16:53:09.665893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.276 [2024-10-14 16:53:09.675367] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.276 [2024-10-14 16:53:09.675721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.276 [2024-10-14 16:53:09.675738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.276 [2024-10-14 16:53:09.675745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.675914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.676083] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.676091] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.676096] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.678758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.277 [2024-10-14 16:53:09.688259] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.688713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.688730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.277 [2024-10-14 16:53:09.688737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.688903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.689075] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.689083] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.689089] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.691717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.277 [2024-10-14 16:53:09.701166] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.701617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.701661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.277 [2024-10-14 16:53:09.701685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.702190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.702357] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.702365] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.702371] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.705034] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.277 [2024-10-14 16:53:09.714197] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.714622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.714639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.277 [2024-10-14 16:53:09.714646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.714818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.714993] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.715002] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.715008] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.717769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.277 [2024-10-14 16:53:09.727127] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.727529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.727545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.277 [2024-10-14 16:53:09.727552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.727724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.727891] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.727900] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.727906] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.730561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.277 [2024-10-14 16:53:09.740080] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.740488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.740504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.277 [2024-10-14 16:53:09.740511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.740684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.740860] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.740868] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.740874] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.743533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.277 [2024-10-14 16:53:09.753007] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.753431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.753447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.277 [2024-10-14 16:53:09.753454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.753626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.753794] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.753802] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.753808] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.756444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.277 [2024-10-14 16:53:09.765883] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.766241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.766257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.277 [2024-10-14 16:53:09.766265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.766432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.766599] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.766615] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.766621] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.769300] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.277 [2024-10-14 16:53:09.778770] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.779190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.779205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.277 [2024-10-14 16:53:09.779212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.779378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.779545] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.779554] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.779560] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.782376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.277 [2024-10-14 16:53:09.791587] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.791938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.791955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.277 [2024-10-14 16:53:09.791962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.792130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.792298] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.792307] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.792313] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.795063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.277 [2024-10-14 16:53:09.804413] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.804700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.804717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.277 [2024-10-14 16:53:09.804724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.277 [2024-10-14 16:53:09.804891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.277 [2024-10-14 16:53:09.805058] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.277 [2024-10-14 16:53:09.805067] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.277 [2024-10-14 16:53:09.805073] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.277 [2024-10-14 16:53:09.807706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.277 [2024-10-14 16:53:09.817290] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.277 [2024-10-14 16:53:09.817657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.277 [2024-10-14 16:53:09.817673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.278 [2024-10-14 16:53:09.817680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.278 [2024-10-14 16:53:09.817854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.278 [2024-10-14 16:53:09.818012] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.278 [2024-10-14 16:53:09.818020] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.278 [2024-10-14 16:53:09.818026] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.278 [2024-10-14 16:53:09.820670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.278 [2024-10-14 16:53:09.830132] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.278 [2024-10-14 16:53:09.830567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.278 [2024-10-14 16:53:09.830615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.278 [2024-10-14 16:53:09.830649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.278 [2024-10-14 16:53:09.831215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.278 [2024-10-14 16:53:09.831383] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.278 [2024-10-14 16:53:09.831391] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.278 [2024-10-14 16:53:09.831396] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.278 [2024-10-14 16:53:09.834033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.278 [2024-10-14 16:53:09.843138] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.278 [2024-10-14 16:53:09.843564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.278 [2024-10-14 16:53:09.843616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.278 [2024-10-14 16:53:09.843641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.278 [2024-10-14 16:53:09.844191] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.278 [2024-10-14 16:53:09.844358] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.278 [2024-10-14 16:53:09.844366] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.278 [2024-10-14 16:53:09.844373] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.278 [2024-10-14 16:53:09.850497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.278 [2024-10-14 16:53:09.858247] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.278 [2024-10-14 16:53:09.858723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.278 [2024-10-14 16:53:09.858745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.278 [2024-10-14 16:53:09.858756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.278 [2024-10-14 16:53:09.859010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.278 [2024-10-14 16:53:09.859264] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.278 [2024-10-14 16:53:09.859276] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.278 [2024-10-14 16:53:09.859285] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.278 [2024-10-14 16:53:09.863333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.278 [2024-10-14 16:53:09.871220] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.278 [2024-10-14 16:53:09.871628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.278 [2024-10-14 16:53:09.871672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.278 [2024-10-14 16:53:09.871695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.278 [2024-10-14 16:53:09.872273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.278 [2024-10-14 16:53:09.872705] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.278 [2024-10-14 16:53:09.872714] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.278 [2024-10-14 16:53:09.872720] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.278 [2024-10-14 16:53:09.875371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.278 [2024-10-14 16:53:09.884026] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.278 [2024-10-14 16:53:09.884422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.278 [2024-10-14 16:53:09.884437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.278 [2024-10-14 16:53:09.884444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.278 [2024-10-14 16:53:09.884616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.278 [2024-10-14 16:53:09.884784] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.278 [2024-10-14 16:53:09.884792] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.278 [2024-10-14 16:53:09.884798] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.278 [2024-10-14 16:53:09.887389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.278 [2024-10-14 16:53:09.897017] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.278 [2024-10-14 16:53:09.897428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.278 [2024-10-14 16:53:09.897472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.278 [2024-10-14 16:53:09.897496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.278 [2024-10-14 16:53:09.898087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.278 [2024-10-14 16:53:09.898619] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.278 [2024-10-14 16:53:09.898628] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.278 [2024-10-14 16:53:09.898634] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.278 [2024-10-14 16:53:09.901281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.278 [2024-10-14 16:53:09.909959] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.278 [2024-10-14 16:53:09.910372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.278 [2024-10-14 16:53:09.910388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.278 [2024-10-14 16:53:09.910395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.278 [2024-10-14 16:53:09.910561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.278 [2024-10-14 16:53:09.910754] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.278 [2024-10-14 16:53:09.910764] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.278 [2024-10-14 16:53:09.910770] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.538 [2024-10-14 16:53:09.913502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.538 [2024-10-14 16:53:09.922787] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.538 [2024-10-14 16:53:09.923198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.538 [2024-10-14 16:53:09.923215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.538 [2024-10-14 16:53:09.923222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.538 [2024-10-14 16:53:09.923388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.538 [2024-10-14 16:53:09.923555] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.538 [2024-10-14 16:53:09.923563] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.538 [2024-10-14 16:53:09.923570] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.538 [2024-10-14 16:53:09.926167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.538 [2024-10-14 16:53:09.935532] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.538 [2024-10-14 16:53:09.935936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.538 [2024-10-14 16:53:09.935952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.538 [2024-10-14 16:53:09.935959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.538 [2024-10-14 16:53:09.936126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.538 [2024-10-14 16:53:09.936297] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.538 [2024-10-14 16:53:09.936305] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.538 [2024-10-14 16:53:09.936311] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.538 [2024-10-14 16:53:09.938919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.538 [2024-10-14 16:53:09.948291] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.538 [2024-10-14 16:53:09.948674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.538 [2024-10-14 16:53:09.948690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.538 [2024-10-14 16:53:09.948697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.538 [2024-10-14 16:53:09.948854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.538 [2024-10-14 16:53:09.949010] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.538 [2024-10-14 16:53:09.949018] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.538 [2024-10-14 16:53:09.949024] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.538 [2024-10-14 16:53:09.951629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.538 [2024-10-14 16:53:09.960998] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.538 [2024-10-14 16:53:09.961415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.538 [2024-10-14 16:53:09.961431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.538 [2024-10-14 16:53:09.961441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:09.961614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:09.961800] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:09.961809] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:09.961815] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:09.964570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.539 [2024-10-14 16:53:09.973947] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:09.974349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-10-14 16:53:09.974364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.539 [2024-10-14 16:53:09.974372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:09.974543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:09.974722] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:09.974731] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:09.974737] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:09.977411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.539 [2024-10-14 16:53:09.986912] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:09.987313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-10-14 16:53:09.987329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.539 [2024-10-14 16:53:09.987336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:09.987503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:09.987693] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:09.987702] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:09.987708] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:09.990408] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.539 [2024-10-14 16:53:09.999729] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:10.000145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-10-14 16:53:10.000161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.539 [2024-10-14 16:53:10.000168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:10.000335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:10.000502] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:10.000514] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:10.000520] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:10.003314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.539 [2024-10-14 16:53:10.012702] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:10.013063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-10-14 16:53:10.013081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.539 [2024-10-14 16:53:10.013090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:10.013261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:10.013433] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:10.013442] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:10.013448] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:10.016189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.539 [2024-10-14 16:53:10.025727] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:10.026089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-10-14 16:53:10.026105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.539 [2024-10-14 16:53:10.026113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:10.026284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:10.026460] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:10.026468] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:10.026474] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:10.030539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.539 [2024-10-14 16:53:10.038822] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:10.039225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-10-14 16:53:10.039242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.539 [2024-10-14 16:53:10.039250] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:10.039417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:10.039586] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:10.039594] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:10.039607] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:10.042354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.539 [2024-10-14 16:53:10.051848] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:10.052265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-10-14 16:53:10.052281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.539 [2024-10-14 16:53:10.052289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:10.052461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:10.052640] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:10.052650] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:10.052656] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:10.055336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.539 [2024-10-14 16:53:10.065087] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:10.065502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-10-14 16:53:10.065518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.539 [2024-10-14 16:53:10.065526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:10.065705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:10.065876] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:10.065885] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:10.065891] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:10.068622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.539 [2024-10-14 16:53:10.078110] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:10.078522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-10-14 16:53:10.078538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.539 [2024-10-14 16:53:10.078546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:10.078723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:10.078894] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:10.078903] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:10.078909] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:10.081648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.539 [2024-10-14 16:53:10.091114] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:10.091546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.539 [2024-10-14 16:53:10.091590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.539 [2024-10-14 16:53:10.091627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.539 [2024-10-14 16:53:10.092190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.539 [2024-10-14 16:53:10.092358] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.539 [2024-10-14 16:53:10.092367] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.539 [2024-10-14 16:53:10.092373] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.539 [2024-10-14 16:53:10.095138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.539 [2024-10-14 16:53:10.104081] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.539 [2024-10-14 16:53:10.104482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-10-14 16:53:10.104498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.540 [2024-10-14 16:53:10.104506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.540 [2024-10-14 16:53:10.104683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.540 [2024-10-14 16:53:10.104857] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.540 [2024-10-14 16:53:10.104868] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.540 [2024-10-14 16:53:10.104876] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.540 [2024-10-14 16:53:10.107570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.540 [2024-10-14 16:53:10.117037] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.540 [2024-10-14 16:53:10.117370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-10-14 16:53:10.117411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.540 [2024-10-14 16:53:10.117434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.540 [2024-10-14 16:53:10.117996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.540 [2024-10-14 16:53:10.118287] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.540 [2024-10-14 16:53:10.118304] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.540 [2024-10-14 16:53:10.118317] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.540 [2024-10-14 16:53:10.124552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.540 [2024-10-14 16:53:10.131800] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.540 [2024-10-14 16:53:10.132298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-10-14 16:53:10.132340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.540 [2024-10-14 16:53:10.132363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.540 [2024-10-14 16:53:10.132932] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.540 [2024-10-14 16:53:10.133186] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.540 [2024-10-14 16:53:10.133197] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.540 [2024-10-14 16:53:10.133210] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.540 [2024-10-14 16:53:10.137257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.540 [2024-10-14 16:53:10.144798] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.540 [2024-10-14 16:53:10.145217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-10-14 16:53:10.145233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.540 [2024-10-14 16:53:10.145241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.540 [2024-10-14 16:53:10.145412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.540 [2024-10-14 16:53:10.145586] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.540 [2024-10-14 16:53:10.145595] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.540 [2024-10-14 16:53:10.145610] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.540 [2024-10-14 16:53:10.148283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.540 [2024-10-14 16:53:10.157804] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.540 [2024-10-14 16:53:10.158159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-10-14 16:53:10.158174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.540 [2024-10-14 16:53:10.158182] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.540 [2024-10-14 16:53:10.158348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.540 [2024-10-14 16:53:10.158516] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.540 [2024-10-14 16:53:10.158524] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.540 [2024-10-14 16:53:10.158530] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.540 [2024-10-14 16:53:10.161217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 692858 Killed "${NVMF_APP[@]}" "$@" 00:28:05.540 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:05.540 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:05.540 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:05.540 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:05.540 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.540 [2024-10-14 16:53:10.170900] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.540 [2024-10-14 16:53:10.171295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.540 [2024-10-14 16:53:10.171311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.540 [2024-10-14 16:53:10.171318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.540 [2024-10-14 16:53:10.171490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.540 [2024-10-14 16:53:10.171671] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.540 [2024-10-14 16:53:10.171683] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.540 [2024-10-14 16:53:10.171689] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.800 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=694264 00:28:05.800 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 694264 00:28:05.800 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:05.800 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 694264 ']' 00:28:05.800 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.800 [2024-10-14 16:53:10.174426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.800 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:05.800 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:05.800 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:05.800 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.800 [2024-10-14 16:53:10.183974] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.800 [2024-10-14 16:53:10.184379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.800 [2024-10-14 16:53:10.184394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.800 [2024-10-14 16:53:10.184401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.800 [2024-10-14 16:53:10.184573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.800 [2024-10-14 16:53:10.184751] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.800 [2024-10-14 16:53:10.184759] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.800 [2024-10-14 16:53:10.184766] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.800 [2024-10-14 16:53:10.187502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.800 [2024-10-14 16:53:10.197056] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.800 [2024-10-14 16:53:10.197440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.800 [2024-10-14 16:53:10.197457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.800 [2024-10-14 16:53:10.197465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.800 [2024-10-14 16:53:10.197644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.800 [2024-10-14 16:53:10.197816] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.800 [2024-10-14 16:53:10.197824] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.800 [2024-10-14 16:53:10.197830] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.800 [2024-10-14 16:53:10.200566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.800 [2024-10-14 16:53:10.210126] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.800 [2024-10-14 16:53:10.210511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.800 [2024-10-14 16:53:10.210527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.800 [2024-10-14 16:53:10.210535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.800 [2024-10-14 16:53:10.210713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.800 [2024-10-14 16:53:10.210885] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.800 [2024-10-14 16:53:10.210893] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.800 [2024-10-14 16:53:10.210899] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.800 [2024-10-14 16:53:10.213646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.800 [2024-10-14 16:53:10.223096] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:28:05.800 [2024-10-14 16:53:10.223136] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.800 [2024-10-14 16:53:10.223184] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.800 [2024-10-14 16:53:10.223562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.800 [2024-10-14 16:53:10.223578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.800 [2024-10-14 16:53:10.223586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.800 [2024-10-14 16:53:10.223764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.800 [2024-10-14 16:53:10.223936] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.800 [2024-10-14 16:53:10.223943] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.800 [2024-10-14 16:53:10.223950] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.800 [2024-10-14 16:53:10.226688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.800 [2024-10-14 16:53:10.236180] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.800 [2024-10-14 16:53:10.236565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.800 [2024-10-14 16:53:10.236582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.800 [2024-10-14 16:53:10.236590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.800 [2024-10-14 16:53:10.236768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.800 [2024-10-14 16:53:10.236940] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.800 [2024-10-14 16:53:10.236948] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.800 [2024-10-14 16:53:10.236955] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.800 [2024-10-14 16:53:10.239699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.800 [2024-10-14 16:53:10.249235] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.800 [2024-10-14 16:53:10.249633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.800 [2024-10-14 16:53:10.249654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.800 [2024-10-14 16:53:10.249661] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.800 [2024-10-14 16:53:10.249834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.800 [2024-10-14 16:53:10.250005] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.800 [2024-10-14 16:53:10.250013] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.800 [2024-10-14 16:53:10.250019] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.800 [2024-10-14 16:53:10.252758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.800 [2024-10-14 16:53:10.262289] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.800 [2024-10-14 16:53:10.262680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.800 [2024-10-14 16:53:10.262697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.800 [2024-10-14 16:53:10.262704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.800 [2024-10-14 16:53:10.262876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.800 [2024-10-14 16:53:10.263048] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.800 [2024-10-14 16:53:10.263056] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.800 [2024-10-14 16:53:10.263062] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.800 [2024-10-14 16:53:10.265805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.800 [2024-10-14 16:53:10.275324] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.800 [2024-10-14 16:53:10.275659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.275675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.801 [2024-10-14 16:53:10.275683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.801 [2024-10-14 16:53:10.275855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.801 [2024-10-14 16:53:10.276028] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.801 [2024-10-14 16:53:10.276036] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.801 [2024-10-14 16:53:10.276042] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.801 [2024-10-14 16:53:10.278782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.801 [2024-10-14 16:53:10.288310] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.801 [2024-10-14 16:53:10.288707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.288724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.801 [2024-10-14 16:53:10.288732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.801 [2024-10-14 16:53:10.288903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.801 [2024-10-14 16:53:10.289078] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.801 [2024-10-14 16:53:10.289086] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.801 [2024-10-14 16:53:10.289094] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.801 [2024-10-14 16:53:10.291834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.801 [2024-10-14 16:53:10.296845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:05.801 [2024-10-14 16:53:10.301387] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.801 [2024-10-14 16:53:10.301800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.301817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.801 [2024-10-14 16:53:10.301824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.801 [2024-10-14 16:53:10.301997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.801 [2024-10-14 16:53:10.302168] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.801 [2024-10-14 16:53:10.302176] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.801 [2024-10-14 16:53:10.302182] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.801 [2024-10-14 16:53:10.304931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.801 [2024-10-14 16:53:10.314458] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.801 [2024-10-14 16:53:10.314910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.314927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.801 [2024-10-14 16:53:10.314934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.801 [2024-10-14 16:53:10.315107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.801 [2024-10-14 16:53:10.315283] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.801 [2024-10-14 16:53:10.315291] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.801 [2024-10-14 16:53:10.315298] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.801 [2024-10-14 16:53:10.318007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.801 [2024-10-14 16:53:10.327544] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.801 [2024-10-14 16:53:10.327953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.327970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.801 [2024-10-14 16:53:10.327978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.801 [2024-10-14 16:53:10.328149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.801 [2024-10-14 16:53:10.328322] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.801 [2024-10-14 16:53:10.328331] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.801 [2024-10-14 16:53:10.328341] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.801 [2024-10-14 16:53:10.331083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.801 [2024-10-14 16:53:10.340336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.801 [2024-10-14 16:53:10.340361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.801 [2024-10-14 16:53:10.340368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.801 [2024-10-14 16:53:10.340374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.801 [2024-10-14 16:53:10.340379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:05.801 [2024-10-14 16:53:10.340507] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.801 [2024-10-14 16:53:10.340922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.340938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.801 [2024-10-14 16:53:10.340946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.801 [2024-10-14 16:53:10.341118] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.801 [2024-10-14 16:53:10.341291] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.801 [2024-10-14 16:53:10.341299] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.801 [2024-10-14 16:53:10.341305] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.801 [2024-10-14 16:53:10.341745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.801 [2024-10-14 16:53:10.341850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.801 [2024-10-14 16:53:10.341851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.801 [2024-10-14 16:53:10.344044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.801 [2024-10-14 16:53:10.353572] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.801 [2024-10-14 16:53:10.353996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.354015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.801 [2024-10-14 16:53:10.354023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.801 [2024-10-14 16:53:10.354197] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.801 [2024-10-14 16:53:10.354370] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.801 [2024-10-14 16:53:10.354379] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.801 [2024-10-14 16:53:10.354386] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.801 [2024-10-14 16:53:10.357129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
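The three "Reactor started on core 1/2/3" notices above line up with the EAL core mask passed when the target was launched (-c 0xE, see the DPDK EAL parameters line earlier): 0xE is binary 1110, i.e. cores 1, 2 and 3 with core 0 left free, which is also why "Total cores available: 3" is reported. A minimal, illustrative sketch for expanding such a mask into a core list:

  # 0xE = 0b1110 -> cores 1 2 3
  mask=0xE
  for bit in {0..31}; do
      (( (mask >> bit) & 1 )) && printf '%d ' "$bit"
  done
  echo    # prints: 1 2 3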
00:28:05.801 [2024-10-14 16:53:10.366536] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.801 [2024-10-14 16:53:10.366951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.366970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.801 [2024-10-14 16:53:10.366978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.801 [2024-10-14 16:53:10.367161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.801 [2024-10-14 16:53:10.367335] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.801 [2024-10-14 16:53:10.367343] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.801 [2024-10-14 16:53:10.367350] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.801 [2024-10-14 16:53:10.370092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.801 [2024-10-14 16:53:10.379633] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.801 [2024-10-14 16:53:10.380056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.380074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.801 [2024-10-14 16:53:10.380082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.801 [2024-10-14 16:53:10.380254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.801 [2024-10-14 16:53:10.380427] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.801 [2024-10-14 16:53:10.380435] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.801 [2024-10-14 16:53:10.380441] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.801 [2024-10-14 16:53:10.383178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.801 [2024-10-14 16:53:10.392717] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.801 [2024-10-14 16:53:10.393079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.393097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.801 [2024-10-14 16:53:10.393105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.801 [2024-10-14 16:53:10.393277] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.801 [2024-10-14 16:53:10.393449] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.801 [2024-10-14 16:53:10.393457] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.801 [2024-10-14 16:53:10.393464] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.801 [2024-10-14 16:53:10.396199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.801 [2024-10-14 16:53:10.405730] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.801 [2024-10-14 16:53:10.406146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.801 [2024-10-14 16:53:10.406161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.802 [2024-10-14 16:53:10.406169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.802 [2024-10-14 16:53:10.406342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.802 [2024-10-14 16:53:10.406515] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.802 [2024-10-14 16:53:10.406523] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.802 [2024-10-14 16:53:10.406534] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.802 [2024-10-14 16:53:10.409271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:05.802 [2024-10-14 16:53:10.418788] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.802 [2024-10-14 16:53:10.419186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.802 [2024-10-14 16:53:10.419202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.802 [2024-10-14 16:53:10.419209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.802 [2024-10-14 16:53:10.419380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.802 [2024-10-14 16:53:10.419552] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.802 [2024-10-14 16:53:10.419560] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.802 [2024-10-14 16:53:10.419567] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:05.802 [2024-10-14 16:53:10.422308] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.802 [2024-10-14 16:53:10.431868] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:05.802 [2024-10-14 16:53:10.432271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:05.802 [2024-10-14 16:53:10.432288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:05.802 [2024-10-14 16:53:10.432295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:05.802 [2024-10-14 16:53:10.432466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:05.802 [2024-10-14 16:53:10.432645] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:05.802 [2024-10-14 16:53:10.432654] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:05.802 [2024-10-14 16:53:10.432660] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.061 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:06.061 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:06.061 [2024-10-14 16:53:10.435394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.061 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:06.061 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:06.061 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.061 [2024-10-14 16:53:10.444922] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.061 [2024-10-14 16:53:10.445195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.061 [2024-10-14 16:53:10.445211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:06.061 [2024-10-14 16:53:10.445219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:06.061 [2024-10-14 16:53:10.445391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:06.061 [2024-10-14 16:53:10.445562] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.061 [2024-10-14 16:53:10.445571] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.061 [2024-10-14 16:53:10.445582] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.062 [2024-10-14 16:53:10.448319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.062 [2024-10-14 16:53:10.457996] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.062 [2024-10-14 16:53:10.458270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.062 [2024-10-14 16:53:10.458286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:06.062 [2024-10-14 16:53:10.458294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:06.062 [2024-10-14 16:53:10.458465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:06.062 [2024-10-14 16:53:10.458641] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.062 [2024-10-14 16:53:10.458651] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.062 [2024-10-14 16:53:10.458657] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.062 [2024-10-14 16:53:10.461384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.062 [2024-10-14 16:53:10.471068] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.062 [2024-10-14 16:53:10.471337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.062 [2024-10-14 16:53:10.471353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:06.062 [2024-10-14 16:53:10.471361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:06.062 [2024-10-14 16:53:10.471531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:06.062 [2024-10-14 16:53:10.471706] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.062 [2024-10-14 16:53:10.471715] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.062 [2024-10-14 16:53:10.471721] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.062 [2024-10-14 16:53:10.474449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.062 [2024-10-14 16:53:10.476860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.062 [2024-10-14 16:53:10.484017] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.062 [2024-10-14 16:53:10.484361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.062 [2024-10-14 16:53:10.484376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:06.062 [2024-10-14 16:53:10.484384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:06.062 [2024-10-14 16:53:10.484559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:06.062 [2024-10-14 16:53:10.484736] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.062 [2024-10-14 16:53:10.484745] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.062 [2024-10-14 16:53:10.484751] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.062 [2024-10-14 16:53:10.487480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.062 [2024-10-14 16:53:10.497027] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.062 [2024-10-14 16:53:10.497360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.062 [2024-10-14 16:53:10.497377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:06.062 [2024-10-14 16:53:10.497385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:06.062 [2024-10-14 16:53:10.497557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:06.062 [2024-10-14 16:53:10.497735] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.062 [2024-10-14 16:53:10.497744] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.062 [2024-10-14 16:53:10.497750] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.062 [2024-10-14 16:53:10.500478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.062 [2024-10-14 16:53:10.510008] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.062 [2024-10-14 16:53:10.510364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.062 [2024-10-14 16:53:10.510381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:06.062 [2024-10-14 16:53:10.510389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:06.062 [2024-10-14 16:53:10.510561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:06.062 [2024-10-14 16:53:10.510739] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.062 [2024-10-14 16:53:10.510748] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.062 [2024-10-14 16:53:10.510755] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.062 [2024-10-14 16:53:10.513493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.062 Malloc0 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.062 [2024-10-14 16:53:10.523029] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.062 [2024-10-14 16:53:10.523454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.062 [2024-10-14 16:53:10.523470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:06.062 [2024-10-14 16:53:10.523478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:06.062 [2024-10-14 16:53:10.523659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:06.062 [2024-10-14 16:53:10.523832] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:06.062 [2024-10-14 16:53:10.523841] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.062 [2024-10-14 16:53:10.523847] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.062 [2024-10-14 16:53:10.526579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.062 [2024-10-14 16:53:10.536113] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.062 [2024-10-14 16:53:10.536449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.062 [2024-10-14 16:53:10.536465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e75c0 with addr=10.0.0.2, port=4420 00:28:06.062 [2024-10-14 16:53:10.536473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e75c0 is same with the state(6) to be set 00:28:06.062 [2024-10-14 16:53:10.536649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e75c0 (9): Bad file descriptor 00:28:06.062 [2024-10-14 16:53:10.536821] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.062 state 00:28:06.062 [2024-10-14 16:53:10.536835] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:06.062 [2024-10-14 16:53:10.536841] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:06.062 [2024-10-14 16:53:10.539572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:06.062 [2024-10-14 16:53:10.539899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.062 16:53:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 693336 00:28:06.063 [2024-10-14 16:53:10.549093] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.063 4941.17 IOPS, 19.30 MiB/s [2024-10-14T14:53:10.697Z] [2024-10-14 16:53:10.623203] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
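The rpc_cmd traces interleaved above (host/bdevperf.sh@17 through @21) are the target-side bring-up that the bdevperf initiator then keeps resetting against. Collected in one place, and assuming the stock scripts/rpc.py talking to the default RPC socket (rpc_cmd is just the harness wrapper around it), the sequence is:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), the reconnect attempts start succeeding, which is why the log switches to "Resetting controller successful" and the IOPS samples that follow.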
00:28:08.372 5848.29 IOPS, 22.84 MiB/s [2024-10-14T14:53:13.943Z] 6543.00 IOPS, 25.56 MiB/s [2024-10-14T14:53:14.876Z] 7092.22 IOPS, 27.70 MiB/s [2024-10-14T14:53:15.811Z] 7529.50 IOPS, 29.41 MiB/s [2024-10-14T14:53:16.748Z] 7889.00 IOPS, 30.82 MiB/s [2024-10-14T14:53:17.683Z] 8188.75 IOPS, 31.99 MiB/s [2024-10-14T14:53:19.060Z] 8444.38 IOPS, 32.99 MiB/s [2024-10-14T14:53:19.625Z] 8662.79 IOPS, 33.84 MiB/s [2024-10-14T14:53:19.884Z] 8855.13 IOPS, 34.59 MiB/s 00:28:15.250 Latency(us) 00:28:15.250 [2024-10-14T14:53:19.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.250 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:15.250 Verification LBA range: start 0x0 length 0x4000 00:28:15.250 Nvme1n1 : 15.01 8857.59 34.60 11041.21 0.00 6412.76 425.20 14730.00 00:28:15.250 [2024-10-14T14:53:19.884Z] =================================================================================================================== 00:28:15.250 [2024-10-14T14:53:19.884Z] Total : 8857.59 34.60 11041.21 0.00 6412.76 425.20 14730.00 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:15.250 rmmod nvme_tcp 00:28:15.250 rmmod nvme_fabrics 00:28:15.250 rmmod nvme_keyring 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 694264 ']' 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 694264 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 694264 ']' 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 694264 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.250 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 694264 
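As a quick sanity check on the summary table above: 8857.59 IOPS at the 4096-byte IO size works out to the reported 34.60 MiB/s (IOPS x IO size / 2^20), for example:

  awk 'BEGIN { printf "%.2f MiB/s\n", 8857.59 * 4096 / 1048576 }'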
00:28:15.509 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:15.509 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:15.509 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 694264' 00:28:15.509 killing process with pid 694264 00:28:15.509 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 694264 00:28:15.509 16:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 694264 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.509 16:53:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.044 16:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.044 00:28:18.044 real 0m26.704s 00:28:18.044 user 1m2.656s 00:28:18.044 sys 0m6.755s 00:28:18.044 16:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:18.044 16:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:18.044 ************************************ 00:28:18.044 END TEST nvmf_bdevperf 00:28:18.044 ************************************ 00:28:18.044 16:53:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:18.044 16:53:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:18.044 16:53:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:18.044 16:53:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.044 ************************************ 00:28:18.044 START TEST nvmf_target_disconnect 00:28:18.044 ************************************ 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:18.045 * Looking for test storage... 
00:28:18.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:18.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.045 --rc genhtml_branch_coverage=1 00:28:18.045 --rc genhtml_function_coverage=1 00:28:18.045 --rc genhtml_legend=1 00:28:18.045 --rc geninfo_all_blocks=1 00:28:18.045 --rc geninfo_unexecuted_blocks=1 00:28:18.045 00:28:18.045 ' 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:18.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.045 --rc genhtml_branch_coverage=1 00:28:18.045 --rc genhtml_function_coverage=1 00:28:18.045 --rc genhtml_legend=1 00:28:18.045 --rc geninfo_all_blocks=1 00:28:18.045 --rc geninfo_unexecuted_blocks=1 00:28:18.045 00:28:18.045 ' 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:18.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.045 --rc genhtml_branch_coverage=1 00:28:18.045 --rc genhtml_function_coverage=1 00:28:18.045 --rc genhtml_legend=1 00:28:18.045 --rc geninfo_all_blocks=1 00:28:18.045 --rc geninfo_unexecuted_blocks=1 00:28:18.045 00:28:18.045 ' 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:18.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.045 --rc genhtml_branch_coverage=1 00:28:18.045 --rc genhtml_function_coverage=1 00:28:18.045 --rc genhtml_legend=1 00:28:18.045 --rc geninfo_all_blocks=1 00:28:18.045 --rc geninfo_unexecuted_blocks=1 00:28:18.045 00:28:18.045 ' 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.045 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:18.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.046 16:53:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:24.625 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.625 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:24.625 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:24.626 Found net devices under 0000:86:00.0: cvl_0_0 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:24.626 Found net devices under 0000:86:00.1: cvl_0_1 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:28:24.626 00:28:24.626 --- 10.0.0.2 ping statistics --- 00:28:24.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.626 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:28:24.626 00:28:24.626 --- 10.0.0.1 ping statistics --- 00:28:24.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.626 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:24.626 ************************************ 00:28:24.626 START TEST nvmf_target_disconnect_tc1 00:28:24.626 ************************************ 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:24.626 16:53:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:24.626 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.626 [2024-10-14 16:53:28.494037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.626 [2024-10-14 16:53:28.494083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b83b70 with addr=10.0.0.2, port=4420 00:28:24.626 [2024-10-14 16:53:28.494119] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:24.626 [2024-10-14 16:53:28.494129] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:24.626 [2024-10-14 16:53:28.494136] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:24.626 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:24.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:24.627 Initializing NVMe Controllers 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:24.627 00:28:24.627 real 0m0.116s 00:28:24.627 user 0m0.047s 00:28:24.627 sys 0m0.068s 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 ************************************ 00:28:24.627 END TEST nvmf_target_disconnect_tc1 00:28:24.627 ************************************ 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 ************************************ 00:28:24.627 START TEST nvmf_target_disconnect_tc2 00:28:24.627 ************************************ 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=699384 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 699384 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 699384 ']' 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 [2024-10-14 16:53:28.635964] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:28:24.627 [2024-10-14 16:53:28.636006] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.627 [2024-10-14 16:53:28.707267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:24.627 [2024-10-14 16:53:28.750005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.627 [2024-10-14 16:53:28.750040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:24.627 [2024-10-14 16:53:28.750047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.627 [2024-10-14 16:53:28.750053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.627 [2024-10-14 16:53:28.750058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.627 [2024-10-14 16:53:28.751496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:24.627 [2024-10-14 16:53:28.751642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:24.627 [2024-10-14 16:53:28.751700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:24.627 [2024-10-14 16:53:28.751712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 Malloc0 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 [2024-10-14 16:53:28.916677] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 16:53:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 [2024-10-14 16:53:28.944914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=699449 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:24.627 16:53:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:26.589 16:53:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 699384 00:28:26.589 16:53:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error 
(sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Write completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.589 Read completed with error (sct=0, sc=8) 00:28:26.589 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 [2024-10-14 16:53:30.973138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 
00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 [2024-10-14 16:53:30.973338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting 
I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 [2024-10-14 16:53:30.973530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 
00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Write completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 Read completed with error (sct=0, sc=8) 00:28:26.590 starting I/O failed 00:28:26.590 [2024-10-14 16:53:30.973736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:26.590 [2024-10-14 16:53:30.973971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.590 [2024-10-14 16:53:30.973993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.590 qpair failed and we were unable to recover it. 00:28:26.590 [2024-10-14 16:53:30.974206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.590 [2024-10-14 16:53:30.974216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.590 qpair failed and we were unable to recover it. 00:28:26.590 [2024-10-14 16:53:30.974445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.590 [2024-10-14 16:53:30.974477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.590 qpair failed and we were unable to recover it. 00:28:26.590 [2024-10-14 16:53:30.974630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.590 [2024-10-14 16:53:30.974663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.590 qpair failed and we were unable to recover it. 00:28:26.590 [2024-10-14 16:53:30.974946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.590 [2024-10-14 16:53:30.974977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.590 qpair failed and we were unable to recover it. 00:28:26.590 [2024-10-14 16:53:30.975167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.590 [2024-10-14 16:53:30.975199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.590 qpair failed and we were unable to recover it. 00:28:26.590 [2024-10-14 16:53:30.975462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.590 [2024-10-14 16:53:30.975493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.590 qpair failed and we were unable to recover it. 00:28:26.590 [2024-10-14 16:53:30.975668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.590 [2024-10-14 16:53:30.975700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.590 qpair failed and we were unable to recover it. 
00:28:26.590 [2024-10-14 16:53:30.975913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.975945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.976139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.976149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.976373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.976405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.976589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.976629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.976834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.976865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.977055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.977065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.977148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.977158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.977323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.977333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.977549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.977559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.977739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.977749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 
00:28:26.591 [2024-10-14 16:53:30.977828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.977837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.978052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.978062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.978136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.978145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.978400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.978411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.978562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.978572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.978774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.978785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.978937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.978947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.979020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.979030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.979188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.979198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.979466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.979476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 
00:28:26.591 [2024-10-14 16:53:30.979618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.979629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.979781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.979791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.979886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.979895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.980041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.980051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.980268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.980298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.980563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.980594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.980861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.980872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.981063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.981073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.981259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.981290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.981552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.981583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 
00:28:26.591 [2024-10-14 16:53:30.981732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.981763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.982021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.982031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.982232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.982247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.982418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.982428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.982659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.982692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.982810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.982841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.983094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.983126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.983333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.983363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.983629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.983663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 00:28:26.591 [2024-10-14 16:53:30.983902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.591 [2024-10-14 16:53:30.983912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.591 qpair failed and we were unable to recover it. 
00:28:26.591 [2024-10-14 16:53:30.984078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.984088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.984170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.984179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.984321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.984334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.984483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.984496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.984751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.984765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.984918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.984931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.985077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.985091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.985360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.985390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.985661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.985693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.985975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.985988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 
00:28:26.592 [2024-10-14 16:53:30.986216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.986229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.986424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.986437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.986644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.986658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.986837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.986850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.986989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.987002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.987146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.987159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.987336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.987350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.987493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.987505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.987679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.987693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.987777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.987789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 
00:28:26.592 [2024-10-14 16:53:30.987884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.987896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.987972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.987984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.988131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.988144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.988407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.988437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.988646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.988679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.988951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.988982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.989159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.989189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.989457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.989488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.989683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.989714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.989952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.989983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 
00:28:26.592 [2024-10-14 16:53:30.990186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.990199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.990342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.990355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.990437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.990452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.990591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.990615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.990763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.990776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.990878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.990891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.991087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.991116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.991309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.991341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.991516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.991547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.592 [2024-10-14 16:53:30.991816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.991849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 
00:28:26.592 [2024-10-14 16:53:30.992130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.592 [2024-10-14 16:53:30.992161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.592 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.992348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.992361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.992589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.992605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.992763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.992777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.993009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.993022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.993184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.993197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.993418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.993450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.993736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.993769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.994004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.994034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.994311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.994342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 
00:28:26.593 [2024-10-14 16:53:30.994623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.994655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.994783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.994813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.995051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.995082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.995278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.995299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.995545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.995566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.995830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.995851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.996119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.996140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.996334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.996355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.996590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.996616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.996861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.996882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 
00:28:26.593 [2024-10-14 16:53:30.997117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.997138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.997390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.997411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.997620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.997643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.997860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.997881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.998110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.998131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.998368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.998389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.998634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.998655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.998839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.998860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.999074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.999095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.999326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.999347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 
00:28:26.593 [2024-10-14 16:53:30.999509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.999529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.999721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:30.999743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:30.999990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:31.000015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:31.000174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:31.000195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:31.000367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:31.000388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:31.000548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:31.000568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:31.000819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:31.000853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:31.001037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:31.001068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:31.001303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:31.001334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 00:28:26.593 [2024-10-14 16:53:31.001515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:31.001545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.593 qpair failed and we were unable to recover it. 
00:28:26.593 [2024-10-14 16:53:31.001757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.593 [2024-10-14 16:53:31.001788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.002048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.002079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.002269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.002299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.002560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.002591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.002882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.002914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.003185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.003216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.003395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.003415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.003652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.003685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.003898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.003932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.004176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.004206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 
00:28:26.594 [2024-10-14 16:53:31.004418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.004448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.004721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.004754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.004972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.005002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.005180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.005201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.005423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.005454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.005643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.005675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.005860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.005891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.006067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.006088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.006318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.006348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.006472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.006503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 
00:28:26.594 [2024-10-14 16:53:31.006643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.006674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.006865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.006896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.007076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.007106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.007362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.007383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.007620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.007642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.007793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.007814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.007920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.007941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.008048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.008070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.008253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.008274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.008538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.008568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 
00:28:26.594 [2024-10-14 16:53:31.008861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.008893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.009107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.009137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.009359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.009395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.594 qpair failed and we were unable to recover it. 00:28:26.594 [2024-10-14 16:53:31.009577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.594 [2024-10-14 16:53:31.009620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.009805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.009836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.010073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.010103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.010282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.010303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.010542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.010562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.010811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.010844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.011112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.011142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 
00:28:26.595 [2024-10-14 16:53:31.011383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.011414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.011679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.011713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.011979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.012009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.012201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.012232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.012432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.012463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.012653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.012686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.012867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.012898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.013091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.013121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.013310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.013341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.013597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.013646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 
00:28:26.595 [2024-10-14 16:53:31.013818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.013850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.014034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.014065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.014328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.014358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.014619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.014651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.014792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.014823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.015006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.015028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.015145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.015166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.015328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.015350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.015435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.015455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.015575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.015597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 
00:28:26.595 [2024-10-14 16:53:31.015717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.015738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.015843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.015864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.015977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.015998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.016183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.016204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.016476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.016496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.016723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.016746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.016915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.016937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.017110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.017131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.017303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.017333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.017447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.017478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 
00:28:26.595 [2024-10-14 16:53:31.017705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.017738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.017927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.017948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.018046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.018072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.595 [2024-10-14 16:53:31.018248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.595 [2024-10-14 16:53:31.018269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.595 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.018448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.018469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.018639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.018671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.018846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.018877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.019116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.019147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.019409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.019441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.019687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.019719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 
00:28:26.596 [2024-10-14 16:53:31.019912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.019943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.020155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.020175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.020420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.020441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.020551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.020573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.020828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.020863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.021133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.021164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.021349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.021380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.021623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.021656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.021926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.021957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.022195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.022227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 
00:28:26.596 [2024-10-14 16:53:31.022465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.022495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.022748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.022780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.023003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.023033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.023298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.023329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.023578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.023617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.023906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.023927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.024095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.024117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.024337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.024358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.024609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.024641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.024784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.024816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 
00:28:26.596 [2024-10-14 16:53:31.025021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.025051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.025354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.025375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.025561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.025582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.025837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.025858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.026101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.026122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.026355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.026376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.026533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.026554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.026818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.026850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.027024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.027054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.027270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.027300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 
00:28:26.596 [2024-10-14 16:53:31.027504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.027535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.027717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.027750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.027921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.027962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.028154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.596 [2024-10-14 16:53:31.028185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.596 qpair failed and we were unable to recover it. 00:28:26.596 [2024-10-14 16:53:31.028455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.028477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.028643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.028666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.028889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.028910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.029059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.029099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.029297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.029327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.029511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.029542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 
00:28:26.597 [2024-10-14 16:53:31.029814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.029847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.030028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.030049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.030317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.030347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.030528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.030559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.030835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.030867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.031130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.031160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.031402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.031433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.031673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.031706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.031925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.031946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.032101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.032122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 
00:28:26.597 [2024-10-14 16:53:31.032353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.032374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.032485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.032506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.032759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.032781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.032883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.032904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.033074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.033095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.033370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.033401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.033703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.033735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.034001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.034031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.034167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.034197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.034464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.034495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 
00:28:26.597 [2024-10-14 16:53:31.034735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.034766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.034956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.034987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.035183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.035215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.035427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.035458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.035745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.035778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.036051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.036073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.036287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.036308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.036525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.036546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.036796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.036818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.037089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.037109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 
00:28:26.597 [2024-10-14 16:53:31.037372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.037393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.037563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.037584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.037791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.597 [2024-10-14 16:53:31.037820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.597 qpair failed and we were unable to recover it. 00:28:26.597 [2024-10-14 16:53:31.038005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.038037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.038305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.038337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.038557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.038587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.038814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.038846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.039064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.039096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.039307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.039331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.039515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.039538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 
00:28:26.598 [2024-10-14 16:53:31.039788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.039810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.039972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.039993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.040256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.040288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.040422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.040453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.040580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.040621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.040862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.040894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.041109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.041143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.041405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.041436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.041683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.041717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.041901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.041932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 
00:28:26.598 [2024-10-14 16:53:31.042166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.042186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.042356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.042378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.042549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.042580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.042810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.042842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.043046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.043077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.043365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.043386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.043613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.043635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.043809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.043830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.043993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.044015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.044285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.044358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 
00:28:26.598 [2024-10-14 16:53:31.044557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.044656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.044884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.044922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.045122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.045155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.045428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.045459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.045663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.045697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.045906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.045937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.046124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.046156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.046371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.598 [2024-10-14 16:53:31.046403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:26.598 qpair failed and we were unable to recover it. 00:28:26.598 [2024-10-14 16:53:31.046631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.046656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.046896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.046918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 
00:28:26.599 [2024-10-14 16:53:31.047112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.047133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.047353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.047375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.047542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.047568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.047752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.047786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.048077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.048110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.048300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.048321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.048592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.048639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.048884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.048915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.049165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.049187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.049428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.049450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 
00:28:26.599 [2024-10-14 16:53:31.049643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.049665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.049910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.049931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.050113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.050134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.050301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.050323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.050597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.050639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.050780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.050812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.051062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.051093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.051300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.051331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.051523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.051554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.051813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.051847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 
00:28:26.599 [2024-10-14 16:53:31.052086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.052117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.052341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.052373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.052578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.052615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.052819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.052851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.053118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.053149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.053400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.053433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.053736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.053770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.053908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.053939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.054085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.054106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.054264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.054285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 
00:28:26.599 [2024-10-14 16:53:31.054463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.054484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.054611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.054634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.054878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.054900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.055069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.055090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.599 [2024-10-14 16:53:31.055243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-10-14 16:53:31.055265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.599 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.055503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.055523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.055762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.055786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.055906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.055927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.056101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.056122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.056367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.056389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 
00:28:26.600 [2024-10-14 16:53:31.056505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.056526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.056752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.056775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.056892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.056918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.057120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.057142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.057320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.057342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.057618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.057640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.057860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.057881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.057998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.058018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.058248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.058269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.058385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.058407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 
00:28:26.600 [2024-10-14 16:53:31.058569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.058591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.058767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.058789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.058982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.059004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.059243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.059275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.059473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.059504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.059692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.059726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.059972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.059994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.060094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.060116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.060333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.060354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.060579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.060606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 
00:28:26.600 [2024-10-14 16:53:31.060772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.060794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.061015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.061037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.061265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.061296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.061416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.061447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.061647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.061681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.061861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.061899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.062070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.062093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.062265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.062287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.062529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.062559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.062838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.062923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 
00:28:26.600 [2024-10-14 16:53:31.063156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.063193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.063388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.063421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.063721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.063755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.600 [2024-10-14 16:53:31.063953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-10-14 16:53:31.063985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.600 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.064122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.064153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.064338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.064363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.064589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.064617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.064847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.064869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.065120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.065150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.065409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.065440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 
00:28:26.601 [2024-10-14 16:53:31.065637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.065671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.065882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.065913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.066099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.066136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.066388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.066409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.066654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.066677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.066900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.066921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.067084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.067106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.067294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.067325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.067540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.067570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.067720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.067754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 
00:28:26.601 [2024-10-14 16:53:31.067995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.068026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.068216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.068238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.068479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.068500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.068656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.068679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.068846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.068889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.069095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.069127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.069336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.069368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.069587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.069630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.069877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.069909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.070076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.070098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 
00:28:26.601 [2024-10-14 16:53:31.070339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.070360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.070585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.070613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.070814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.070836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.071002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.071023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.071135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.071156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.071401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.071421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.071591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.071619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.071841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.071862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.072094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.072124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.601 qpair failed and we were unable to recover it. 00:28:26.601 [2024-10-14 16:53:31.072268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-10-14 16:53:31.072300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 
00:28:26.602 [2024-10-14 16:53:31.072487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.072519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.072708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.072742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.072867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.072898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.073031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.073062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.073254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.073285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.073562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.073584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.073761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.073783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.074008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.074039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.074300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.074330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.074546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.074577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 
00:28:26.602 [2024-10-14 16:53:31.074830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.074863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.075121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.075142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.075367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.075388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.075650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.075674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.075926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.075947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.076219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.076241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.076512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.076533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.076785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.076808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.077035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.077057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.077229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.077250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 
00:28:26.602 [2024-10-14 16:53:31.077416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.077437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.077615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.077638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.077845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.077866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.078067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.078089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.078351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.078372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.078538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.078559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.078770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.078793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.079017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.079048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.079311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.079343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.079640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.079673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 
00:28:26.602 [2024-10-14 16:53:31.079942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.079973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.080243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.080275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.080492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.080524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.080796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.080830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.081029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.081051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.081279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.081300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.081546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.081568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.602 [2024-10-14 16:53:31.081805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.602 [2024-10-14 16:53:31.081826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.602 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.081989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.082011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.082165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.082208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 
00:28:26.603 [2024-10-14 16:53:31.082496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.082527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.082727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.082761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.082889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.082920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.083189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.083221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.083529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.083561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.083720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.083752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.084020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.084052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.084282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.084304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.084539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.084560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.084824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.084847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 
00:28:26.603 [2024-10-14 16:53:31.085023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.085044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.085165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.085187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.085363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.085384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.085570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.085613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.085871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.085903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.086089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.086121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.086414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.086446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.086565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.086596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.086757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.086790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.087041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.087073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 
00:28:26.603 [2024-10-14 16:53:31.087286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.087307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.087472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.087495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.087688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.087731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.088010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.088041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.088340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.088371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.088646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.088679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.088815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.088847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.089069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.089101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.089380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.089413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.089665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.089698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 
00:28:26.603 [2024-10-14 16:53:31.089962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.089995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.090189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.090221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.090488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.090519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.603 qpair failed and we were unable to recover it. 00:28:26.603 [2024-10-14 16:53:31.090644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.603 [2024-10-14 16:53:31.090677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.090852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.090883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.091108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.091140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.091317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.091349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.091596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.091638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.091807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.091828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.091998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.092036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 
00:28:26.604 [2024-10-14 16:53:31.092306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.092337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.092532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.092564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.092824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.092857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.093127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.093148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.093388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.093410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.093647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.093671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.093832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.093853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.094065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.094097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.094307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.094338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.094616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.094649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 
00:28:26.604 [2024-10-14 16:53:31.094902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.094933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.095235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.095257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.095529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.095551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.095731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.095753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.096009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.096030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.096265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.096286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.096509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.096530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.096784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.096807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.097061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.097082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.097288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.097310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 
00:28:26.604 [2024-10-14 16:53:31.097492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.097514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.097738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.097762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.098026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.098048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.098286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.098308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.098559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.098581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.098785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.098808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.099040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.099061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.099315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.099347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.099558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.099589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.099849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.099881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 
00:28:26.604 [2024-10-14 16:53:31.100066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.100096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.100345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.100376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.100630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.100652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.604 [2024-10-14 16:53:31.100881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.604 [2024-10-14 16:53:31.100903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.604 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.101083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.101105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.101307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.101338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.101543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.101574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.101729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.101762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.102033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.102064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.102353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.102390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 
00:28:26.605 [2024-10-14 16:53:31.102596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.102641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.102938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.102969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.103228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.103258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.103547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.103568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.103759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.103781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.104035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.104056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.104326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.104348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.104572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.104594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.104831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.104854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.105108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.105129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 
00:28:26.605 [2024-10-14 16:53:31.105375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.105397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.105560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.105581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.105768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.105790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.105999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.106021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.106173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.106195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.106356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.106378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.106631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.106654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.106830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.106852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.107101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.107122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.107323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.107354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 
00:28:26.605 [2024-10-14 16:53:31.107622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.107655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.107979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.108014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.108276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.108307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.108531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.108552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.108806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.108829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.109056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.109078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.109242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.109264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.109474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.109505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.109729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.109760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.109962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.109995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 
00:28:26.605 [2024-10-14 16:53:31.110263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.110285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.110536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.110559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.110761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.110783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.110906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.605 [2024-10-14 16:53:31.110928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.605 qpair failed and we were unable to recover it. 00:28:26.605 [2024-10-14 16:53:31.111146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.111177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.111399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.111430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.111624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.111658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.111836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.111868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.112142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.112174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.112348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.112375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 
00:28:26.606 [2024-10-14 16:53:31.112623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.112657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.112963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.112995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.113206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.113228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.113408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.113430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.113683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.113706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.113927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.113948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.114070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.114092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.114359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.114380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.114555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.114577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.114719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.114742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 
00:28:26.606 [2024-10-14 16:53:31.114874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.114896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.115075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.115097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.115304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.115335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.115557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.115590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.115799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.115832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.116108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.116139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.116427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.116458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.116661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.116694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.116895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.116926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.117069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.117090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 
00:28:26.606 [2024-10-14 16:53:31.117250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.117273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.117530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.117562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.117706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.117739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.117957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.117990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.118293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.118325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.118589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.118634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.118872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.118906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.119185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.119217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.119445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.119477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.119785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.119819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 
00:28:26.606 [2024-10-14 16:53:31.120019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.120051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.120303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.120335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.120630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.120653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.120826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.120849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.606 qpair failed and we were unable to recover it. 00:28:26.606 [2024-10-14 16:53:31.121008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.606 [2024-10-14 16:53:31.121029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.121258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.121281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.121458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.121481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.121725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.121749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.122008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.122040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.122350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.122388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 
00:28:26.607 [2024-10-14 16:53:31.122506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.122528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.122723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.122746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.122929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.122951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.123146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.123178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.123399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.123430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.123581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.123624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.123751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.123784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.123985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.124015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.124268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.124300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.124573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.124596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 
00:28:26.607 [2024-10-14 16:53:31.124778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.124800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.124993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.125024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.125302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.125334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.125556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.125587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.125935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.125968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.126270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.126292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.126541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.126564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.126836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.126860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.127164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.127186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.127408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.127430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 
00:28:26.607 [2024-10-14 16:53:31.127635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.127659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.127784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.127806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.127917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.127938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.128099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.128121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.128377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.128399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.128657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.128680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.128935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.128980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.129263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.129296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.129492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.129523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 00:28:26.607 [2024-10-14 16:53:31.129784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.129817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.607 qpair failed and we were unable to recover it. 
00:28:26.607 [2024-10-14 16:53:31.130070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.607 [2024-10-14 16:53:31.130101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.130385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.130416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.130669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.130702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.130973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.131006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.131280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.131301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.131553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.131575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.131807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.131830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.132075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.132097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.132283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.132304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.132538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.132577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 
00:28:26.608 [2024-10-14 16:53:31.132842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.132875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.133076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.133107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.133243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.133265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.133512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.133534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.133764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.133788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.133909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.133931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.134164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.134186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.134369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.134391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.134498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.134520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.134798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.134822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 
00:28:26.608 [2024-10-14 16:53:31.135064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.135095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.135376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.135408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.135615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.135649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.135806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.135839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.136117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.136151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.136328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.136351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.136551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.136584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.136815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.136848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.137069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.137101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.137408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.137440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 
00:28:26.608 [2024-10-14 16:53:31.137678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.137712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.137906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.137938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.138244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.138276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.138390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.138420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.138620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.138644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.138824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.138847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.139106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.139139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.139371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.139402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.139534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.139567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.608 [2024-10-14 16:53:31.139803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.139836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 
00:28:26.608 [2024-10-14 16:53:31.140029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.608 [2024-10-14 16:53:31.140069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.608 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.140255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.140278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.140546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.140579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.140775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.140808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.140963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.140994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.141267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.141299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.141626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.141659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.141864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.141896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.142172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.142214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.142493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.142519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 
00:28:26.609 [2024-10-14 16:53:31.142678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.142702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.142858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.142881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.143136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.143168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.143381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.143412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.143663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.143698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.143962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.143994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.144267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.144292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.144573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.144595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.144786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.144809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 00:28:26.609 [2024-10-14 16:53:31.144993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.609 [2024-10-14 16:53:31.145015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.609 qpair failed and we were unable to recover it. 
00:28:26.609 [2024-10-14 16:53:31.145208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.609 [2024-10-14 16:53:31.145239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:26.609 qpair failed and we were unable to recover it.
[The same three-line error — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt, with log timestamps running from 16:53:31.145 through 16:53:31.196; only the timestamps differ.]
00:28:26.615 [2024-10-14 16:53:31.196064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.615 [2024-10-14 16:53:31.196086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:26.615 qpair failed and we were unable to recover it.
00:28:26.615 [2024-10-14 16:53:31.196202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.196224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.196324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.196346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.196612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.196634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.196761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.196782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.196942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.196964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.197144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.197165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.197347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.197369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.197628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.197661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.197868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.197900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.198084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.198114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 
00:28:26.615 [2024-10-14 16:53:31.198255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.198287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.198564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.198596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.198836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.198869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.199085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.199116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.199324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.199355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.199555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.199586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.199865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.199897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.200096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.200128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.200446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.200468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.200744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.200768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 
00:28:26.615 [2024-10-14 16:53:31.201009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.201041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.201236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.201269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.201414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.201447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.201746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.201781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.202008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.202041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.202170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.202203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.202343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.202375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.202572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.202595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.202782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.202806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.202986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.203008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 
00:28:26.615 [2024-10-14 16:53:31.203262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.203284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.203482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.203505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.203632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.615 [2024-10-14 16:53:31.203656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.615 qpair failed and we were unable to recover it. 00:28:26.615 [2024-10-14 16:53:31.203895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.203918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.204028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.204051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.204298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.204320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.204552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.204575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.204847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.204870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.205001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.205024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.205162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.205186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 
00:28:26.616 [2024-10-14 16:53:31.205281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.205302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.205488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.205521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.205723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.205758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.205982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.206014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.206242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.206274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.206459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.206482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.206706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.206733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.206864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.206886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.207007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.207030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.207266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.207298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 
00:28:26.616 [2024-10-14 16:53:31.207519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.207553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.207823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.207858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.208148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.208181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.208326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.208357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.208661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.208695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.616 [2024-10-14 16:53:31.208830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.616 [2024-10-14 16:53:31.208853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.616 qpair failed and we were unable to recover it. 00:28:26.906 [2024-10-14 16:53:31.208980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.906 [2024-10-14 16:53:31.209002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.906 qpair failed and we were unable to recover it. 00:28:26.906 [2024-10-14 16:53:31.209124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.906 [2024-10-14 16:53:31.209145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.906 qpair failed and we were unable to recover it. 00:28:26.906 [2024-10-14 16:53:31.209332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.906 [2024-10-14 16:53:31.209356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.906 qpair failed and we were unable to recover it. 00:28:26.906 [2024-10-14 16:53:31.209463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.906 [2024-10-14 16:53:31.209484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.906 qpair failed and we were unable to recover it. 
00:28:26.906 [2024-10-14 16:53:31.209653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.906 [2024-10-14 16:53:31.209678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.906 qpair failed and we were unable to recover it. 00:28:26.906 [2024-10-14 16:53:31.209852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.906 [2024-10-14 16:53:31.209875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.906 qpair failed and we were unable to recover it. 00:28:26.906 [2024-10-14 16:53:31.209970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.906 [2024-10-14 16:53:31.209992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.906 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.210124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.210146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.210344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.210366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.210489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.210511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.210701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.210724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.210959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.210981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.211108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.211129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.211422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.211444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 
00:28:26.907 [2024-10-14 16:53:31.211692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.211732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.211946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.211968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.212094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.212116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.212244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.212266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.212499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.212521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.212713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.212759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.212962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.212993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.213232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.213264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.213538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.213560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.213707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.213731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 
00:28:26.907 [2024-10-14 16:53:31.213926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.213958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.214095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.214125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.214422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.214452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.214663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.214686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.214823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.214845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.214951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.214973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.215107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.215132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.215224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.215244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.215422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.215443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.215634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.215668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 
00:28:26.907 [2024-10-14 16:53:31.215873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.215905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.216157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.216188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.216464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.216496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.907 [2024-10-14 16:53:31.216661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.907 [2024-10-14 16:53:31.216696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.907 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.216850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.216881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.217067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.217098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.217236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.217257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.217414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.217436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.217597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.217630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.217860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.217883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 
00:28:26.908 [2024-10-14 16:53:31.218018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.218040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.218162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.218183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.218458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.218479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.218654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.218678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.218843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.218864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.218994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.219015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.219206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.219227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.219432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.219454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.219560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.219582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.219867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.219890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 
00:28:26.908 [2024-10-14 16:53:31.219989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.220011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.220176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.220198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.220316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.220338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.220638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.220672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.220807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.220837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.221037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.221068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.221224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.221254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.221527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.221558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.221772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.221794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.221922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.221946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 
00:28:26.908 [2024-10-14 16:53:31.222069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.222090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.222271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.222293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.222459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.222482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.222661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.222703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.222921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.222953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.223086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.223118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.223346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.223383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.223520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.223552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.223749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.223783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 00:28:26.908 [2024-10-14 16:53:31.223982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.908 [2024-10-14 16:53:31.224015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.908 qpair failed and we were unable to recover it. 
00:28:26.909 [2024-10-14 16:53:31.224144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.909 [2024-10-14 16:53:31.224174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:26.909 qpair failed and we were unable to recover it.
00:28:26.915 [2024-10-14 16:53:31.224321 - 2024-10-14 16:53:31.268737] posix.c:1055:posix_sock_create / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats continuously throughout this interval.
00:28:26.915 [2024-10-14 16:53:31.268867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.915 [2024-10-14 16:53:31.268890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.915 qpair failed and we were unable to recover it. 00:28:26.915 [2024-10-14 16:53:31.268989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.915 [2024-10-14 16:53:31.269011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.915 qpair failed and we were unable to recover it. 00:28:26.915 [2024-10-14 16:53:31.269121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.915 [2024-10-14 16:53:31.269143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.915 qpair failed and we were unable to recover it. 00:28:26.915 [2024-10-14 16:53:31.269328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.915 [2024-10-14 16:53:31.269350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.915 qpair failed and we were unable to recover it. 00:28:26.915 [2024-10-14 16:53:31.269523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.915 [2024-10-14 16:53:31.269545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.915 qpair failed and we were unable to recover it. 00:28:26.915 [2024-10-14 16:53:31.269703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.269726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.269853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.269874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.270036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.270057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.270228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.270250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.270508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.270529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 
00:28:26.916 [2024-10-14 16:53:31.270735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.270760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.270945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.270967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.271134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.271156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.271362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.271405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.271697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.271725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.271897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.271920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.272109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.272134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.272247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.272269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.272392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.272414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.272591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.272620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 
00:28:26.916 [2024-10-14 16:53:31.272779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.272795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.272916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.272931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.273095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.273110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.273220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.273236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.273347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.273362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.273618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.273634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.273815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.273838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.274014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.274038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.274249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.274274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.274439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.274462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 
00:28:26.916 [2024-10-14 16:53:31.274698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.274722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.274850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.274867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.275016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.275031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.275144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.275159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.275342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.275358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.275501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.275516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.916 [2024-10-14 16:53:31.275758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.916 [2024-10-14 16:53:31.275775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.916 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.275879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.275896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.276016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.276032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.276201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.276225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 
00:28:26.917 [2024-10-14 16:53:31.276476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.276500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.276697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.276721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.276977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.277001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.277115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.277134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.277406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.277424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.277511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.277525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.277746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.277763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.277863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.277877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.277992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.278007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.278126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.278141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 
00:28:26.917 [2024-10-14 16:53:31.278338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.278360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.278554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.278579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.278783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.278808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.279078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.279101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.279390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.279412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.279596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.279628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.279763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.279786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.279920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.279941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.280068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.280089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 00:28:26.917 [2024-10-14 16:53:31.280285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.280307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.917 qpair failed and we were unable to recover it. 
00:28:26.917 [2024-10-14 16:53:31.280512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.917 [2024-10-14 16:53:31.280533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.280702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.280726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.280847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.280868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.280983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.281005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.281101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.281120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.281358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.281380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.281497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.281523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.281759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.281781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.282009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.282030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.282157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.282177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 
00:28:26.918 [2024-10-14 16:53:31.282423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.282444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.282621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.282645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.282765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.282785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.282958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.282980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.283140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.283163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.283434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.283455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.283638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.283660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.283780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.283800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.283923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.283944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.284125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.284146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 
00:28:26.918 [2024-10-14 16:53:31.284335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.284357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.284547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.284569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.284707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.284730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.284966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.284988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.285265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.285288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.285394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.285414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.285510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.285530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.285696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.285719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.285932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.285954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.286074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.286095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 
00:28:26.918 [2024-10-14 16:53:31.286196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.286218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.286338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.286359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.286555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.918 [2024-10-14 16:53:31.286575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.918 qpair failed and we were unable to recover it. 00:28:26.918 [2024-10-14 16:53:31.286792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.286816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.287006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.287026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.287277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.287299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.287495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.287517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.287687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.287709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.287895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.287917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.288091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.288112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 
00:28:26.919 [2024-10-14 16:53:31.288389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.288411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.288666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.288688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.288875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.288896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.289029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.289051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.289169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.289190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.289435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.289458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.289628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.289655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.289836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.289858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.290030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.290051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.290163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.290184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 
00:28:26.919 [2024-10-14 16:53:31.290459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.290480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.290591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.290625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.290830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.290851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.290980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.291002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.291112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.291135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.291350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.291372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.291622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.291646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.291820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.291841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.291979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.292001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.292184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.292206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 
00:28:26.919 [2024-10-14 16:53:31.292444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.292465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.292680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.292703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.292892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.919 [2024-10-14 16:53:31.292916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.919 qpair failed and we were unable to recover it. 00:28:26.919 [2024-10-14 16:53:31.293022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.293044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.293158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.293180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.293357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.293379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.293554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.293576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.293768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.293791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.293974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.293996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.294128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.294150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 
00:28:26.920 [2024-10-14 16:53:31.294437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.294457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.294643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.294670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.294854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.294878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.295071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.295107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.295303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.295321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.295512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.295529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.295726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.295747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.295971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.295999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.296140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.296165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.296389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.296417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 
00:28:26.920 [2024-10-14 16:53:31.296669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.296695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.296884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.296906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.297013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.297029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.297147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.297164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.297336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.297353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.297590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.297614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.297824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.297848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.297960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.297977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.298084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.298100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.298200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.298220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 
00:28:26.920 [2024-10-14 16:53:31.298422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.298447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.298706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.298735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.298891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.298926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.920 [2024-10-14 16:53:31.299088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.920 [2024-10-14 16:53:31.299120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.920 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.299412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.299450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.299682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.299718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.299879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.299912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.300172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.300204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.300553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.300589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.300758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.300792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 
00:28:26.921 [2024-10-14 16:53:31.300944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.300977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.301127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.301160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.301387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.301420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.301656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.301692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.301898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.301932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.302190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.302224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.302513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.302546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.302798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.302831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.303039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.303074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.303300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.303332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 
00:28:26.921 [2024-10-14 16:53:31.303562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.303595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.303737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.303769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.303972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.304004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.304228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.304253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.304440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.304461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.304642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.304666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.304844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.304865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.305077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.305098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.305222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.305243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.305415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.305437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 
00:28:26.921 [2024-10-14 16:53:31.305556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.305577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.305737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.305760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.305938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.305961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.306128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.306150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.306256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.921 [2024-10-14 16:53:31.306278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.921 qpair failed and we were unable to recover it. 00:28:26.921 [2024-10-14 16:53:31.306429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.306451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.306682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.306705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.306825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.306848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.306949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.306971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.307102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.307123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 
00:28:26.922 [2024-10-14 16:53:31.307297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.307319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.307484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.307505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.307752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.307776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.307951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.307973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.308084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.308105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.308296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.308317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.308594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.308632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.308810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.308831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.308943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.308965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.309071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.309092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 
00:28:26.922 [2024-10-14 16:53:31.309323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.309344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.309567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.309589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.309741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.309764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.309878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.309900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.310086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.310107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.310409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.310430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.310598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.310631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.310854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.310875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.310992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.311013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.311224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.311246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 
00:28:26.922 [2024-10-14 16:53:31.311440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.311462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.311741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.311764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.311941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.311963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.312090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.312117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.312331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.312353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.312525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.922 [2024-10-14 16:53:31.312547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.922 qpair failed and we were unable to recover it. 00:28:26.922 [2024-10-14 16:53:31.312756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.312778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.312955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.312977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.313102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.313123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.313222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.313243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 
00:28:26.923 [2024-10-14 16:53:31.313497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.313518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.313750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.313773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.314053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.314076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.314277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.314298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.314411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.314432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.314637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.314660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.314840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.314862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.315037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.315059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.315191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.315213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.315471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.315492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 
00:28:26.923 [2024-10-14 16:53:31.315667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.315706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.315904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.315926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.316110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.316131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.316433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.316454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.316640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.316662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.316831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.316852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.316989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.317011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.317196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.317217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.317390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.317412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.317648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.317671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 
00:28:26.923 [2024-10-14 16:53:31.317868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.317891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.318013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.318035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.318146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.318167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.318336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.318358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.318525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.923 [2024-10-14 16:53:31.318546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.923 qpair failed and we were unable to recover it. 00:28:26.923 [2024-10-14 16:53:31.318802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.318826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.319009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.319030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.319151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.319172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.319438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.319459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.319737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.319760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 
00:28:26.924 [2024-10-14 16:53:31.319895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.319917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.320047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.320068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.320244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.320266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.320467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.320493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.320686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.320709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.320965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.320986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.321169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.321191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.321352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.321374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.321613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.321635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.321739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.321761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 
00:28:26.924 [2024-10-14 16:53:31.321924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.321948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.322129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.322150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.322274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.322295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.322491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.322513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.322710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.322733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.322911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.322932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.323112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.323133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.323328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.323349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.323452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.323474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.323711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.323734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 
00:28:26.924 [2024-10-14 16:53:31.323844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.323866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.324059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.324081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.324292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.324315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.324587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.324626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.324860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.324882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.325006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.325028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.325187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.325208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.325406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.325428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.325596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.325628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 00:28:26.924 [2024-10-14 16:53:31.325749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.924 [2024-10-14 16:53:31.325771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.924 qpair failed and we were unable to recover it. 
00:28:26.924 [2024-10-14 16:53:31.325951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.325973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.326148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.326170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.326478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.326500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.326759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.326781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.326980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.327002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.327270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.327291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.327524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.327545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.327723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.327746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.327882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.327904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.328005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.328026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 
00:28:26.925 [2024-10-14 16:53:31.328211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.328233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.328394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.328416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.328699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.328722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.328971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.328997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.329184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.329206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.329462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.329483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.329609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.329632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.329884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.329906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.330097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.330119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.330306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.330327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 
00:28:26.925 [2024-10-14 16:53:31.330516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.330538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.330776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.330798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.330932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.330953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.331149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.331170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.331405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.331427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.331613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.331636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.331759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.331781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.331988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.332011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.332180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.332200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.332399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.332420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 
00:28:26.925 [2024-10-14 16:53:31.332747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.332770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.332953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.925 [2024-10-14 16:53:31.332974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.925 qpair failed and we were unable to recover it. 00:28:26.925 [2024-10-14 16:53:31.333151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.333172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.333307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.333329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.333560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.333581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.333845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.333923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.334160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7bbb0 is same with the state(6) to be set 00:28:26.926 [2024-10-14 16:53:31.334435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.334469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.334672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.334699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.334909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.334931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 
00:28:26.926 [2024-10-14 16:53:31.335108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.335136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.335283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.335307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.335529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.335544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.335700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.335716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.335815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.335828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.335986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.336001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.336147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.336162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.336263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.336278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.336359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.336373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.336452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.336465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 
00:28:26.926 [2024-10-14 16:53:31.336548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.336561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.336634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.336648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.336801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.336824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.336930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.336950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.337083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.337105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.337279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.337300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.337473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.337494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.337594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.337619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.337705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.337718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.337828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.337842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 
00:28:26.926 [2024-10-14 16:53:31.337915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.337928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.338085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.338101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.338194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.338207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.338283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.338296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.338451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.338465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.926 [2024-10-14 16:53:31.338539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.926 [2024-10-14 16:53:31.338552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.926 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.338719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.338734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.338841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.338859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.339087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.339109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.339276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.339299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 
00:28:26.927 [2024-10-14 16:53:31.339411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.339432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.339592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.339632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.339809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.339830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.339930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.339950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.340070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.340090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.340201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.340219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.340312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.340330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.340420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.340437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.340631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.340651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.340763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.340782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 
00:28:26.927 [2024-10-14 16:53:31.340941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.340973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.341102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.341128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.341243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.341270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.341380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.341407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.341661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.341692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.341818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.341847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.341959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.341987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.342161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.342190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.342368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.342397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.342525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.342552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 
00:28:26.927 [2024-10-14 16:53:31.342708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.342737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.342937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.342964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.343070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.343099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.343219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.343247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.343449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.343475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.343661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.343692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.343831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.343858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.344052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.344079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.344265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.344296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.344470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.344499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 
00:28:26.927 [2024-10-14 16:53:31.344627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.344655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.344758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.344786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.344889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.344917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.927 [2024-10-14 16:53:31.345027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.927 [2024-10-14 16:53:31.345055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.927 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.345167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.345194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.345315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.345343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.345526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.345553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.345752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.345828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.346070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.346125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.346328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.346354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 
00:28:26.928 [2024-10-14 16:53:31.346466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.346487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.346613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.346637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.346748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.346770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.346885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.346906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.347014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.347037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.347143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.347165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.347257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.347277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.347377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.347397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.347498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.347520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.347756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.347780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 
00:28:26.928 [2024-10-14 16:53:31.347883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.347912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.348082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.348105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.348196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.348217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.348329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.348350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.348446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.348466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.348571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.348593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.348801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.348824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.348945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.348966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.349137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.349159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 00:28:26.928 [2024-10-14 16:53:31.349271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.349293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.928 qpair failed and we were unable to recover it. 
00:28:26.928 [2024-10-14 16:53:31.349392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.928 [2024-10-14 16:53:31.349412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.349577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.349598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.349799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.349821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.349927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.349950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.350041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.350063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.350153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.350175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.350280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.350302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.350390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.350411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.350648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.350671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.350872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.350894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 
00:28:26.929 [2024-10-14 16:53:31.351014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.351035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.351140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.351161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.351264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.351286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.351380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.351401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.351567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.351588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.351706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.351728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.351911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.351931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.352032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.352055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.352152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.352175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.352284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.352306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 
00:28:26.929 [2024-10-14 16:53:31.352462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.352484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.352586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.352618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.352722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.352744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.352873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.352895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.353053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.353075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.353170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.353192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.353355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.353376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.353471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.353493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.353671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.353694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.353796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.353818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 
00:28:26.929 [2024-10-14 16:53:31.353935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.353957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.354050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.354073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.354253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.354276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.354397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.929 [2024-10-14 16:53:31.354419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.929 qpair failed and we were unable to recover it. 00:28:26.929 [2024-10-14 16:53:31.354545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.354568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.354689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.354713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.354811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.354833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.354922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.354944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.355046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.355067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.355162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.355183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 
00:28:26.930 [2024-10-14 16:53:31.355288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.355309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.355408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.355429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.355533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.355555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.355655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.355677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.355845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.355868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.356068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.356089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.356179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.356200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.356397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.356419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.356614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.356637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.356727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.356748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 
00:28:26.930 [2024-10-14 16:53:31.356880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.356902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.357006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.357027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.357116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.357137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.357298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.357319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.357423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.357444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.357534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.357556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.357666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.357689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.357864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.357890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.358050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.358071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 00:28:26.930 [2024-10-14 16:53:31.358169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.930 [2024-10-14 16:53:31.358191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.930 qpair failed and we were unable to recover it. 
00:28:26.930 [2024-10-14 16:53:31.358285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.930 [2024-10-14 16:53:31.358306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:26.930 qpair failed and we were unable to recover it.
00:28:26.930 [the same three-entry sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 16:53:31.358401 through 16:53:31.388730; duplicate log entries condensed]
00:28:26.937 [2024-10-14 16:53:31.388881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.937 [2024-10-14 16:53:31.388902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:26.937 qpair failed and we were unable to recover it.
00:28:26.937 [2024-10-14 16:53:31.389078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.937 [2024-10-14 16:53:31.389099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.937 qpair failed and we were unable to recover it. 00:28:26.937 [2024-10-14 16:53:31.389261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.937 [2024-10-14 16:53:31.389282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.937 qpair failed and we were unable to recover it. 00:28:26.937 [2024-10-14 16:53:31.389374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.937 [2024-10-14 16:53:31.389395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.937 qpair failed and we were unable to recover it. 00:28:26.937 [2024-10-14 16:53:31.389645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.937 [2024-10-14 16:53:31.389667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.937 qpair failed and we were unable to recover it. 00:28:26.937 [2024-10-14 16:53:31.389772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.937 [2024-10-14 16:53:31.389793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.937 qpair failed and we were unable to recover it. 00:28:26.937 [2024-10-14 16:53:31.389960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.937 [2024-10-14 16:53:31.389981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.937 qpair failed and we were unable to recover it. 00:28:26.937 [2024-10-14 16:53:31.390091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.937 [2024-10-14 16:53:31.390112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.390222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.390243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.390434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.390456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.390642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.390665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 
00:28:26.938 [2024-10-14 16:53:31.390786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.390808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.390983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.391004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.391225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.391247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.391468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.391488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.391592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.391636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.391789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.391810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.392029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.392050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.392224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.392246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.392397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.392419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.392582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.392612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 
00:28:26.938 [2024-10-14 16:53:31.392757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.392778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.392877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.392898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.393001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.393023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.393108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.393129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.393291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.393313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.393416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.393436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.393531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.393552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.393716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.393744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.394010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.394031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.394137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.394158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 
00:28:26.938 [2024-10-14 16:53:31.394334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.394356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.394537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.394558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.394733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.394756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.394924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.394945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.395125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.395145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.395240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.938 [2024-10-14 16:53:31.395261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.938 qpair failed and we were unable to recover it. 00:28:26.938 [2024-10-14 16:53:31.395361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.395382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.395543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.395565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.395820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.395842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.395945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.395967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 
00:28:26.939 [2024-10-14 16:53:31.396157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.396178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.396270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.396291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.396409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.396431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.396618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.396640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.396743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.396765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.396940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.396961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.397046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.397067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.397221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.397242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.397347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.397367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.397477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.397499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 
00:28:26.939 [2024-10-14 16:53:31.397650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.397674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.397778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.397798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.398003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.398025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.398196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.398218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.398392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.398414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.398617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.398639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.398738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.398760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.398852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.398872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.399105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.399126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.399211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.399233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 
00:28:26.939 [2024-10-14 16:53:31.399349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.399370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.399556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.399577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.399682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.399704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.399812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.399833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.400002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.400024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.400188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.400209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.939 [2024-10-14 16:53:31.400311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.939 [2024-10-14 16:53:31.400332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.939 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.400421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.400446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.400550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.400571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.400663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.400684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 
00:28:26.940 [2024-10-14 16:53:31.400775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.400797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.400960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.400980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.401157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.401178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.401291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.401312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.401475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.401496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.401617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.401639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.401741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.401763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.401852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.401873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.401972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.401993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.402095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.402116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 
00:28:26.940 [2024-10-14 16:53:31.402206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.402227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.402318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.402339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.402508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.402529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.402626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.402649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.402755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.402776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.402869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.402890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.403038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.403060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.403146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.403167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.403322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.403343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.403496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.403517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 
00:28:26.940 [2024-10-14 16:53:31.403638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.403660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.403816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.403838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.403926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.403948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.404124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.404145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.404236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.404258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.404377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.404398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.404551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.404572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.404740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.404761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.404858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.404879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.404968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.404988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 
00:28:26.940 [2024-10-14 16:53:31.405073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.405094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.405255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.405275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.405369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.405390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.405499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.405521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.405609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.405631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.405794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.405816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.405975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.940 [2024-10-14 16:53:31.405996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.940 qpair failed and we were unable to recover it. 00:28:26.940 [2024-10-14 16:53:31.406085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.406110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.406276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.406297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.406385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.406406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 
00:28:26.941 [2024-10-14 16:53:31.406581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.406611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.406723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.406744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.406829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.406849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.406976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.406998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.407171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.407192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.407292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.407314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.407498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.407519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.407629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.407652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.407758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.407779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.407872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.407893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 
00:28:26.941 [2024-10-14 16:53:31.407985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.408006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.408185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.408206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.408302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.408322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.408409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.408431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.408620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.408642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.408812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.408834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.409060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.409081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.409195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.409217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.409384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.409405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.409575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.409596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 
00:28:26.941 [2024-10-14 16:53:31.409706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.409727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.409916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.409937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.410101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.410121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.410227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.410248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.410412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.410434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.410596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.410626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.410784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.410806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.410913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.410935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.411055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.411076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.411163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.411185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 
00:28:26.941 [2024-10-14 16:53:31.411271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.411292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.411464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.411486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.411594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.411639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.411755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.411776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.411859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.941 [2024-10-14 16:53:31.411880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.941 qpair failed and we were unable to recover it. 00:28:26.941 [2024-10-14 16:53:31.411971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.411991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.412174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.412195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.412285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.412309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.412413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.412434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.412519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.412540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 
00:28:26.942 [2024-10-14 16:53:31.412661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.412684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.412837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.412858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.412956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.412977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.413073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.413094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.413183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.413204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.413288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.413310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.413528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.413549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.413710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.413732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.413836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.413857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.414027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.414048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 
00:28:26.942 [2024-10-14 16:53:31.414143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.414164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.414254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.414275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.414431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.414452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.414612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.414634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.414816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.414838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.414994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.415015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.415189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.415209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.415377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.415398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.415495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.415515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.415616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.415637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 
00:28:26.942 [2024-10-14 16:53:31.415722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.415742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.415832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.415853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.415933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.415954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.416069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.416090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.416201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.416222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.416315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.416336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.942 [2024-10-14 16:53:31.416432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.942 [2024-10-14 16:53:31.416453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.942 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.416551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.416572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.416755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.416778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.416873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.416894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 
00:28:26.943 [2024-10-14 16:53:31.417061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.417082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.417174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.417195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.417283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.417304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.417405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.417426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.417531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.417551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.417640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.417663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.417835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.417856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.417945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.417970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.418059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.418079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.418298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.418320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 
00:28:26.943 [2024-10-14 16:53:31.418421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.418443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.418573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.418594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.418699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.418743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.418898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.418919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.419020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.419041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.419138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.419159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.419313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.419335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.419423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.419444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.419540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.419561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.419652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.419674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 
00:28:26.943 [2024-10-14 16:53:31.419871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.419893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.420079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.420100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.420186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.420207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.420361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.420382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.420567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.420588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.420686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.420707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.420878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.420898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.421117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.421139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.421241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.943 [2024-10-14 16:53:31.421261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.943 qpair failed and we were unable to recover it. 00:28:26.943 [2024-10-14 16:53:31.421350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.421371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 
00:28:26.944 [2024-10-14 16:53:31.421476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.421497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.421573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.421593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.421710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.421732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.421838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.421859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.421967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.421988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.422158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.422179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.422362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.422384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.422613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.422635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.422791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.422812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.422973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.422995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 
00:28:26.944 [2024-10-14 16:53:31.423084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.423105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.423258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.423279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.423430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.423451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.423536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.423558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.423643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.423665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.423821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.423841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.424016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.424038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.424138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.424163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.424267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.424288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.424451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.424472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 
00:28:26.944 [2024-10-14 16:53:31.424558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.424579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.424682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.424711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.424816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.424830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.425031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.425044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.425122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.944 [2024-10-14 16:53:31.425134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.944 qpair failed and we were unable to recover it. 00:28:26.944 [2024-10-14 16:53:31.425281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.425295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.425378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.425391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.425476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.425489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.425575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.425589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.425762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.425785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 
00:28:26.945 [2024-10-14 16:53:31.425863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.425882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.425969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.425988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.426083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.426102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.426183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.426201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.426282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.426301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.426391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.426408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.426487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.426500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.426572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.426584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.426673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.426687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.426823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.426835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 
00:28:26.945 [2024-10-14 16:53:31.426901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.426913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.426982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.426993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.427126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.427139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.427218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.427231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.427300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.427312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.427450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.427463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.427529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.427541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.427615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.427628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.427699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.427710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.427852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.427872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 
00:28:26.945 [2024-10-14 16:53:31.427971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.427990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.428151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.428172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.428249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.428268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.428362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.428381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.428462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.428482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.428656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.428680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.428749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.945 [2024-10-14 16:53:31.428761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.945 qpair failed and we were unable to recover it. 00:28:26.945 [2024-10-14 16:53:31.428830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.428842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.428922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.428935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.429074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.429087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 
00:28:26.946 [2024-10-14 16:53:31.429153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.429165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.429250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.429262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.429331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.429342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.429423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.429436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.429571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.429584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.429743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.429758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.429829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.429841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.429921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.429940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.430038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.430072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.430148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.430166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 
00:28:26.946 [2024-10-14 16:53:31.430378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.430398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.430493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.430512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.430660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.430679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.430752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.430765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.430856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.430870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.431026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.431039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.431187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.431200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.431269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.431282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.431351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.431363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.431518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.431533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 
00:28:26.946 [2024-10-14 16:53:31.431614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.431629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.431776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.431790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.431977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.431991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.432151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.432173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.432273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.432298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.432392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.432413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.432502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.432522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.432605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.946 [2024-10-14 16:53:31.432626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.946 qpair failed and we were unable to recover it. 00:28:26.946 [2024-10-14 16:53:31.432716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.432736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.432856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.432874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 
00:28:26.947 [2024-10-14 16:53:31.433024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.433038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.433175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.433189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.433269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.433283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.433355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.433369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.433516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.433529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.433623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.433637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.433726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.433740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.433812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.433825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.433988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.434002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.434081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.434094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 
00:28:26.947 [2024-10-14 16:53:31.434167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.434180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.434251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.434271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.434420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.434440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.434544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.434564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.434656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.434677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.434845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.434865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.434961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.434981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.435075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.435095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.435177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.435196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.435351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.435373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 
00:28:26.947 [2024-10-14 16:53:31.435473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.435488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.435630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.435645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.435785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.435798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.435881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.435894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.436036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.436050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.436199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.436212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.436346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.436359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.436454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.436468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.436545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.436558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 00:28:26.947 [2024-10-14 16:53:31.436706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.947 [2024-10-14 16:53:31.436720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.947 qpair failed and we were unable to recover it. 
00:28:26.947 [2024-10-14 16:53:31.436892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.436914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.437007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.437027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.437107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.437127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.437286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.437307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.437469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.437493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.437657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.437674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.437752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.437766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.437934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.437949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.438095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.438109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.438337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.438350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 
00:28:26.948 [2024-10-14 16:53:31.438434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.438448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.438550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.438564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.438730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.438744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.438829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.438844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.438943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.438963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.439114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.439135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.439253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.439273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.439380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.439400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.439481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.439501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.439583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.439610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 
00:28:26.948 [2024-10-14 16:53:31.439856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.439874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.439965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.439979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.440060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.440074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.440149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.440162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.440250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.440263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.440338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.440352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.440504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.440519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.440619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.440635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.440727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.440741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.440891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.440909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 
00:28:26.948 [2024-10-14 16:53:31.441057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.441075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.948 [2024-10-14 16:53:31.441169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.948 [2024-10-14 16:53:31.441195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.948 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.441357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.441384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.441565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.441592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.441790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.441816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.441917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.441944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.442122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.442150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.442292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.442317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.442418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.442445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.442619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.442647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 
00:28:26.949 [2024-10-14 16:53:31.442857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.442884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.442987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.443013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.443106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.443132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.443293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.443320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.443416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.443447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.443680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.443709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.443806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.443833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.443926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.443952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.444116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.444142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.444324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.444352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 
00:28:26.949 [2024-10-14 16:53:31.444536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.444563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.444691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.444719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.444838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.444865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.445096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.445122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.445297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.445325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.445466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.445493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.445662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.445689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.445792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.445818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.949 [2024-10-14 16:53:31.446001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.949 [2024-10-14 16:53:31.446027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.949 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.446137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.446163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 
00:28:26.950 [2024-10-14 16:53:31.446264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.446291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.446418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.446445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.446544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.446571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.446687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.446714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.446823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.446849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.446959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.446986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.447089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.447114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.447226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.447253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.447368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.447394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.447491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.447517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 
00:28:26.950 [2024-10-14 16:53:31.447631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.447658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.447760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.447786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.447908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.447935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.448100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.448126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.448293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.448319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.448411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.448438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.448545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.448571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.448698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.448726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.448851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.448877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.449038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.449065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 
00:28:26.950 [2024-10-14 16:53:31.449240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.449267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.449376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.449402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.449567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.449593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.449783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.449812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.449925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.449957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.450064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.450091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.450195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.450221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.450323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.450349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.450505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.450531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.450628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.450655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 
00:28:26.950 [2024-10-14 16:53:31.450822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.450844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.450937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.950 [2024-10-14 16:53:31.450955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.950 qpair failed and we were unable to recover it. 00:28:26.950 [2024-10-14 16:53:31.451049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.451062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.451147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.451160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.451227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.451238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.451371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.451383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.451443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.451454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.451519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.451530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.451683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.451697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.451777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.451791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 
00:28:26.951 [2024-10-14 16:53:31.451938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.451951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.452030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.452042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.452210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.452222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.452314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.452333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.452485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.452504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.452582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.452606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.452682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.452700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.452783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.452802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.452952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.452972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.453046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.453059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 
00:28:26.951 [2024-10-14 16:53:31.453253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.453266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.453402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.453414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.453556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.453569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.453658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.453672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.453753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.453765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.453828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.453838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.453905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.453921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.453989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.454001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.454087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.454099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.454238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.454251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 
00:28:26.951 [2024-10-14 16:53:31.454345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.454361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.454457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.454474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.454565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.454582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.454738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.454758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.454836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.454858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.455075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.455094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.455240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.455259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.455334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.455352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.951 qpair failed and we were unable to recover it. 00:28:26.951 [2024-10-14 16:53:31.455427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.951 [2024-10-14 16:53:31.455445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.455540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.455558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 
00:28:26.952 [2024-10-14 16:53:31.455718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.455739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.455818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.455831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.455897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.455908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.455966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.455978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.456130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.456142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.456285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.456297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.456357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.456368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.456434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.456446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.456518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.456530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.456622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.456635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 
00:28:26.952 [2024-10-14 16:53:31.456770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.456782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.456859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.456871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.456936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.456947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.457038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.457050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.457209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.457228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.457324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.457343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.457504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.457523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.457671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.457691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.457788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.457808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.457896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.457908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 
00:28:26.952 [2024-10-14 16:53:31.457976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.457986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.458062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.458073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.458151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.458164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.458262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.458274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.458357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.458369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.458453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.458465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.458528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.458539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.458616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.458628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.458767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.458780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.458840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.458857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 
00:28:26.952 [2024-10-14 16:53:31.458920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.458931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.458993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.459004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.459066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.459077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.459213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.459232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.459320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.459343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.459420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.459438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.459514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.459531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.459631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.459650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.459738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.952 [2024-10-14 16:53:31.459755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.952 qpair failed and we were unable to recover it. 00:28:26.952 [2024-10-14 16:53:31.459858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.459876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 
00:28:26.953 [2024-10-14 16:53:31.460042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.460125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.460200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.460284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.460384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.460461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.460539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.460690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.460803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.460889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 
00:28:26.953 [2024-10-14 16:53:31.460976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.460990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.461056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.461070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.461156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.461171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.461244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.461258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.461332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.461353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.461537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.461558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.461647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.461670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.461771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.461793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.461882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.461904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.461997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.462017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 
00:28:26.953 [2024-10-14 16:53:31.462098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.462119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.462276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.462294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.462389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.462405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.462499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.462514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.462583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.462597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.462676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.462690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.462839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.462853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.463006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.463021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.463231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.463245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.463404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.463418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 
00:28:26.953 [2024-10-14 16:53:31.463504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.463518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.463637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.463660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.463748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.463770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.464025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.464049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.953 [2024-10-14 16:53:31.464157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.953 [2024-10-14 16:53:31.464183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.953 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.464278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.464299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.464466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.464486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.464695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.464711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.464811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.464826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.464906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.464920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 
00:28:26.954 [2024-10-14 16:53:31.465004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.465020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.465092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.465106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.465248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.465263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.465343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.465357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.465425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.465439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.465515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.465530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.465604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.465619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.465704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.465718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.465879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.465902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.465996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.466018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 
00:28:26.954 [2024-10-14 16:53:31.466120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.466141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.466229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.466251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.466357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.466378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.466472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.466492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.466584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.466610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.466712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.466734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.466839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.466854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.467002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.467016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.467099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.467113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.467187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.467201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 
00:28:26.954 [2024-10-14 16:53:31.467348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.467363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.467480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.467527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.467737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.467763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.467855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.467876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.467964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.467985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.468070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.468091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.954 [2024-10-14 16:53:31.468257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.954 [2024-10-14 16:53:31.468279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.954 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.468443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.468464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.468625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.468647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.468744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.468766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 
00:28:26.955 [2024-10-14 16:53:31.468981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.469003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.469081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.469102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.469196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.469218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.469437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.469457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.469681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.469714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.469869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.469891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.469983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.470004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.470125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.470145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.470247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.470268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.470353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.470373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 
00:28:26.955 [2024-10-14 16:53:31.470454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.470475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.470559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.470580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.470714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.470745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.470833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.470855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.470936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.470954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.471024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.471039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.471123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.471137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.471316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.471336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.471428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.471448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.471696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.471717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 
00:28:26.955 [2024-10-14 16:53:31.471892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.471910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.471999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.472019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.472204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.472225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.472348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.472379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.472562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.472592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.472779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.472810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.472932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.472962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.473060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.473089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.473261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.473291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.473395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.473423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 
00:28:26.955 [2024-10-14 16:53:31.473632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.473665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.473855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.473884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.474000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.474031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.955 [2024-10-14 16:53:31.474142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.955 [2024-10-14 16:53:31.474170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.955 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.474271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.474301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.474465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.474494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.474617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.474647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.474869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.474901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.475018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.475047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.475227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.475255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 
00:28:26.956 [2024-10-14 16:53:31.475372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.475401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.475595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.475637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.475754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.475782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.475962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.475992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.476175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.476211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.476383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.476412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.476591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.476632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.476741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.476770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.477000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.477024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.477210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.477232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 
00:28:26.956 [2024-10-14 16:53:31.477329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.477350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.477447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.477468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.477560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.477580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.477686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.477708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.477814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.477835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.477933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.477954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.478055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.478075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.478164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.478185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.478338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.478359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.478505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.478526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 
00:28:26.956 [2024-10-14 16:53:31.478632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.478654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.478756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.478776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.478878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.478900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.479052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.479072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.479231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.479252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.479436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.479457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.479674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.479696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.479784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.479804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.480020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.480041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 00:28:26.956 [2024-10-14 16:53:31.480145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-10-14 16:53:31.480165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:26.956 qpair failed and we were unable to recover it. 
00:28:26.956 [2024-10-14 16:53:31.480262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.956 [2024-10-14 16:53:31.480283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:26.956 qpair failed and we were unable to recover it.
00:28:26.956 [2024-10-14 16:53:31.480441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.956 [2024-10-14 16:53:31.480513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420
00:28:26.956 qpair failed and we were unable to recover it.
00:28:26.956 [2024-10-14 16:53:31.480651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.956 [2024-10-14 16:53:31.480692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420
00:28:26.956 qpair failed and we were unable to recover it.
[... same three-line failure pattern repeated for every remaining connection attempt between 16:53:31.480 and 16:53:31.512 (Jenkins timestamps 00:28:26.956 through 00:28:27.250): connect() to addr=10.0.0.2, port=4420 keeps failing with errno = 111 for tqpairs 0x7f712c000b90, 0x7f7120000b90 and 0x1c6dc60, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:27.250 [2024-10-14 16:53:31.512049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.512072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.512263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.512284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.512380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.512402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.512517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.512536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.512657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.512680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.512769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.512790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.512881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.512899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.512997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.513012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.513176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.513190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.513285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.513299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 
00:28:27.250 [2024-10-14 16:53:31.513372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.513385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.513452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.513465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.513620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.513636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.513717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.513730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.513819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.513833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.513985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.513999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.250 qpair failed and we were unable to recover it. 00:28:27.250 [2024-10-14 16:53:31.514134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.250 [2024-10-14 16:53:31.514149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.514235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.514248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.514333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.514352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.514504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.514523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 
00:28:27.251 [2024-10-14 16:53:31.514677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.514698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.514796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.514815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.514918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.514938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.515030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.515049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.515305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.515324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.515404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.515418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.515483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.515496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.515631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.515646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.515797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.515812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.515879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.515891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 
00:28:27.251 [2024-10-14 16:53:31.515965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.515978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.516050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.516067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.516144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.516158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.516242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.516255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.516410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.516424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.516503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.516516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.516609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.516631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.516790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.516811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.516904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.516922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.517075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.517096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 
00:28:27.251 [2024-10-14 16:53:31.517250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.517271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.517358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.517380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.517598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.517621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.517697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.517710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.517805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.517817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.517960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.517974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.518125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.518139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.518215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.518228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.518373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.518387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.518528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.518542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 
00:28:27.251 [2024-10-14 16:53:31.518620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.518634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.518854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.518877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.518957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.518976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.519068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.519090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.519193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.519214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.519312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.519333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.251 qpair failed and we were unable to recover it. 00:28:27.251 [2024-10-14 16:53:31.519426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.251 [2024-10-14 16:53:31.519446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.519621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.519641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.519785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.519800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.519936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.519950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 
00:28:27.252 [2024-10-14 16:53:31.520093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.520107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.520287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.520301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.520384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.520398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.520550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.520564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.520638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.520651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.520804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.520819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.520904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.520918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.521007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.521026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.521243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.521266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.521371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.521393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 
00:28:27.252 [2024-10-14 16:53:31.521488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.521508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.521667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.521694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.521847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.521865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.522004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.522019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.522109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.522123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.522214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.522227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.522325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.522341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.522422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.522438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.522592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.522620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.522715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.522731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 
00:28:27.252 [2024-10-14 16:53:31.522835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.522852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.522941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.522957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.523175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.523202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.523296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.523321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.523414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.523441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.523616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.523657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.523767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.523793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.523958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.523986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.524095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.524120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.524290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.524317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 
00:28:27.252 [2024-10-14 16:53:31.524479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.524505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.524663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.524692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.524810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.524836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.524944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.524970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.525150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.525176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.525287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.525314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.252 [2024-10-14 16:53:31.525424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.252 [2024-10-14 16:53:31.525451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.252 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.525612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.525639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.525770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.525796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.525889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.525915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 
00:28:27.253 [2024-10-14 16:53:31.526023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.526048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.526218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.526244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.526406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.526432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.526611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.526638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.526752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.526779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.526941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.526967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.527125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.527151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.527336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.527364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.527522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.527549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.527669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.527695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 
00:28:27.253 [2024-10-14 16:53:31.527886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.527912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.528028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.528061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.528166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.528192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.528292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.528319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.528547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.528576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.528696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.528721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.528914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.528941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.529057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.529084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.529268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.529294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.529407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.529433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 
00:28:27.253 [2024-10-14 16:53:31.529535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.529561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.529695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.529722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.529839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.529865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.529959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.529984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.530153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.530181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.530371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.530399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.530512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.530539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.530643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.530671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.530901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.530927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.531097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.531125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 
00:28:27.253 [2024-10-14 16:53:31.531308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.531335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.531497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.531522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.531752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.531781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.531897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.531924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.532100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.532126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.532218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.532244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.532411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.253 [2024-10-14 16:53:31.532432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.253 qpair failed and we were unable to recover it. 00:28:27.253 [2024-10-14 16:53:31.532580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.532606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.532763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.532833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.533031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.533066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 
00:28:27.254 [2024-10-14 16:53:31.533182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.533215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.533333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.533354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.533456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.533471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.533620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.533635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.533704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.533717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.533875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.533889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.534094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.534108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.534188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.534201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.534351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.534365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.534431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.534444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 
00:28:27.254 [2024-10-14 16:53:31.534530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.534543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.534615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.534636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.534716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.534730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.534795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.534808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.534887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.534900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.534970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.534983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.535139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.535153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.535240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.535253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.535386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.535401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.535465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.535479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 
00:28:27.254 [2024-10-14 16:53:31.535619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.535634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.535778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.535793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.535927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.535941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.536054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.536068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.536132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.536144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.536231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.536244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.536379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.536394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.536488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.536501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.536639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.536654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 00:28:27.254 [2024-10-14 16:53:31.536730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.254 [2024-10-14 16:53:31.536743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.254 qpair failed and we were unable to recover it. 
00:28:27.255 [2024-10-14 16:53:31.536810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.536824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.536898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.536911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.536989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.537001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.537161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.537176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.537259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.537271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.537361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.537376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.537466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.537480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.537554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.537567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.537673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.537688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.537750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.537763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 
00:28:27.255 [2024-10-14 16:53:31.537852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.537865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.537955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.537967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.538039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.538052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.538130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.538143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.538287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.538301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.538449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.538463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.538541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.538554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.538643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.538657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.538726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.538739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.538879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.538894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 
00:28:27.255 [2024-10-14 16:53:31.538970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.538983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.539061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.539077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.539211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.539225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.539499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.539530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.539776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.539811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.539928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.539959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.540104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.540118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.540275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.540289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.540438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.540452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.540604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.540618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 
00:28:27.255 [2024-10-14 16:53:31.540692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.540705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.540791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.540804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.540895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.540908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.540995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.541008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.541086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.541099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.541171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.541185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.541288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.541302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.541379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.541391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.541479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.541492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 00:28:27.255 [2024-10-14 16:53:31.541629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.255 [2024-10-14 16:53:31.541644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.255 qpair failed and we were unable to recover it. 
00:28:27.256 [2024-10-14 16:53:31.541722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.541736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.541830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.541844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.541914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.541927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.542060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.542073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.542215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.542229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.542298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.542314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.542455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.542475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.542618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.542637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.542762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.542809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.542933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.542958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 
00:28:27.256 [2024-10-14 16:53:31.543055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.543077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.543182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.543204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.543289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.543311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.543404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.543426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.543513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.543535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.543640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.543663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.543749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.543770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.543925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.543956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.544071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.544101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.544252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.544283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 
00:28:27.256 [2024-10-14 16:53:31.544544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.544565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.544665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.544694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.544797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.544818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.544907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.544928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.545023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.545044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.545235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.545266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.545387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.545418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.545533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.545565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.545692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.545725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.545839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.545870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 
00:28:27.256 [2024-10-14 16:53:31.545975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.546007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.546114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.546144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.546384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.546424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.546501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.546522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.546671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.546694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.546854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.546875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.547030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.547052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.547150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.547170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.547247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.547268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 00:28:27.256 [2024-10-14 16:53:31.547418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.256 [2024-10-14 16:53:31.547438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.256 qpair failed and we were unable to recover it. 
00:28:27.256 [2024-10-14 16:53:31.547520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.547541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.547699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.547721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.547873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.547894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.548064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.548096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.548200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.548230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.548342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.548373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.548486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.548517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.548689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.548721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.548848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.548884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.549016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.549047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 
00:28:27.257 [2024-10-14 16:53:31.549165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.549196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.549317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.549349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.549531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.549563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.549685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.549709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.549873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.549892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.549975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.549993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.550135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.550153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.550241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.550259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.550345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.550363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.550435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.550455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 
00:28:27.257 [2024-10-14 16:53:31.550538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.550556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.550715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.550739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.550848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.550866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.551074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.551093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.551307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.551326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.551412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.551430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.551512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.551529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.551610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.551630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.551713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.551730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.551808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.551827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 
00:28:27.257 [2024-10-14 16:53:31.551933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.551951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.552116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.552135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.552299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.552317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.552416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.552442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.552545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.552569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.552703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.552730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.552856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.552883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.552983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.553008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.553113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.553140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 00:28:27.257 [2024-10-14 16:53:31.553249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.553274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.257 qpair failed and we were unable to recover it. 
00:28:27.257 [2024-10-14 16:53:31.553367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.257 [2024-10-14 16:53:31.553394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.553489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.553514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.553626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.553648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.553830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.553851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.554004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.554025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.554131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.554152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.554251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.554271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.554421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.554442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.554557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.554579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.554684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.554705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 
00:28:27.258 [2024-10-14 16:53:31.554788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.554810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.554902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.554923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.555068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.555089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.555188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.555209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.555415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.555447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.555626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.555659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.555919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.555950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.556059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.556094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.556252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.556273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.556435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.556455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 
00:28:27.258 [2024-10-14 16:53:31.556645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.556668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.556834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.556858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.556961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.556982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.557136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.557157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.557236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.557257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.557359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.557380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.557475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.557496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.557590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.557617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.557710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.557731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.557855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.557876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 
00:28:27.258 [2024-10-14 16:53:31.558042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.558073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.558344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.558377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.558495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.558525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.558762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.558795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.558913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.558945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.559070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.559103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.559211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.559243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.559352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.559374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.559473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.559494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 00:28:27.258 [2024-10-14 16:53:31.559709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.559732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.258 qpair failed and we were unable to recover it. 
00:28:27.258 [2024-10-14 16:53:31.559830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.258 [2024-10-14 16:53:31.559851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.560087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.560109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.560207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.560228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.560326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.560347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.560501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.560532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.560737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.560771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.560950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.560981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.561243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.561264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.561420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.561442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.561620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.561652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 
00:28:27.259 [2024-10-14 16:53:31.561777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.561808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.561989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.562020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.562240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.562272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.562546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.562577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.562858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.562890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.563010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.563040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.563179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.563201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.563351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.563372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.563474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.563494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.563623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.563645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 
00:28:27.259 [2024-10-14 16:53:31.563735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.563755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.259 [2024-10-14 16:53:31.563855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.259 [2024-10-14 16:53:31.563878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.259 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.563992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.564013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.564116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.564136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.564310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.564330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.564488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.564509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.564593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.564621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.564782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.564803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.564912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.564933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.565109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.565130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 
00:28:27.260 [2024-10-14 16:53:31.565217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.565236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.565332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.565353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.565511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.565532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.565717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.565739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.565834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.565856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.566014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.260 [2024-10-14 16:53:31.566035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.260 qpair failed and we were unable to recover it. 00:28:27.260 [2024-10-14 16:53:31.566152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.566172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.566252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.566272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.566384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.566405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.566499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.566520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 
00:28:27.261 [2024-10-14 16:53:31.566615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.566635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.566787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.566808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.566922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.566942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.567107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.567129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.567285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.567307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.567459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.567480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.567584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.567612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.567775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.567797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.567955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.567980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 00:28:27.261 [2024-10-14 16:53:31.568139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.261 [2024-10-14 16:53:31.568161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.261 qpair failed and we were unable to recover it. 
00:28:27.262 [2024-10-14 16:53:31.568277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.568298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.568459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.568480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.568645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.568675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.568857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.568889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.568998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.569030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.569202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.569232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.569400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.569437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.569613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.569637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.569720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.569742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.569909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.569931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 
00:28:27.262 [2024-10-14 16:53:31.570022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.570042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.570127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.570146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.570269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.570290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.570369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.570389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.570545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.570567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.570681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.570703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.570781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.570801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.571057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.571078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.571226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.571246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.571338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.571356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 
00:28:27.262 [2024-10-14 16:53:31.571446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.571466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.571633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.571656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.262 [2024-10-14 16:53:31.571772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.262 [2024-10-14 16:53:31.571793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.262 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.571890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.571911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.571994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.572012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.572180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.572202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.572361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.572381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.572478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.572498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.572588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.572613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.572708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.572728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 
00:28:27.263 [2024-10-14 16:53:31.572821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.572841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.572988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.573008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.573093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.573113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.573276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.573296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.573379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.573401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.573483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.573504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.573614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.573635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.573794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.573815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.573917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.573941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.574119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.574139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 
00:28:27.263 [2024-10-14 16:53:31.574234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.263 [2024-10-14 16:53:31.574254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.263 qpair failed and we were unable to recover it. 00:28:27.263 [2024-10-14 16:53:31.574351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.264 [2024-10-14 16:53:31.574372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.264 qpair failed and we were unable to recover it. 00:28:27.264 [2024-10-14 16:53:31.574538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.264 [2024-10-14 16:53:31.574558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.264 qpair failed and we were unable to recover it. 00:28:27.264 [2024-10-14 16:53:31.574649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.264 [2024-10-14 16:53:31.574669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.264 qpair failed and we were unable to recover it. 00:28:27.264 [2024-10-14 16:53:31.574752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.264 [2024-10-14 16:53:31.574771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.264 qpair failed and we were unable to recover it. 00:28:27.264 [2024-10-14 16:53:31.574876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.264 [2024-10-14 16:53:31.574896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.264 qpair failed and we were unable to recover it. 00:28:27.264 [2024-10-14 16:53:31.574981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.264 [2024-10-14 16:53:31.575002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.264 qpair failed and we were unable to recover it. 00:28:27.264 [2024-10-14 16:53:31.575103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.264 [2024-10-14 16:53:31.575123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.264 qpair failed and we were unable to recover it. 00:28:27.264 [2024-10-14 16:53:31.575215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.264 [2024-10-14 16:53:31.575236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.264 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.575393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.575413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 
00:28:27.265 [2024-10-14 16:53:31.575561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.575582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.575735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.575756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.575853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.575873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.576045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.576066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.576244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.576265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.576348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.576368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.576550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.576571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.576657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.576676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.576847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.576868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.577016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.577037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 
00:28:27.265 [2024-10-14 16:53:31.577206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.577226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.265 [2024-10-14 16:53:31.577392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.265 [2024-10-14 16:53:31.577413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.265 qpair failed and we were unable to recover it. 00:28:27.266 [2024-10-14 16:53:31.577613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.577635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.266 qpair failed and we were unable to recover it. 00:28:27.266 [2024-10-14 16:53:31.577717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.577735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.266 qpair failed and we were unable to recover it. 00:28:27.266 [2024-10-14 16:53:31.577884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.577905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.266 qpair failed and we were unable to recover it. 00:28:27.266 [2024-10-14 16:53:31.578131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.578151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.266 qpair failed and we were unable to recover it. 00:28:27.266 [2024-10-14 16:53:31.578296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.578316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.266 qpair failed and we were unable to recover it. 00:28:27.266 [2024-10-14 16:53:31.578419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.578448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.266 qpair failed and we were unable to recover it. 00:28:27.266 [2024-10-14 16:53:31.578688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.578720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.266 qpair failed and we were unable to recover it. 00:28:27.266 [2024-10-14 16:53:31.578936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.578967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.266 qpair failed and we were unable to recover it. 
00:28:27.266 [2024-10-14 16:53:31.579142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.579173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.266 qpair failed and we were unable to recover it. 00:28:27.266 [2024-10-14 16:53:31.579356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.579386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.266 qpair failed and we were unable to recover it. 00:28:27.266 [2024-10-14 16:53:31.579575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.266 [2024-10-14 16:53:31.579630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-10-14 16:53:31.579748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-10-14 16:53:31.579778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-10-14 16:53:31.579893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-10-14 16:53:31.579924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-10-14 16:53:31.580096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-10-14 16:53:31.580125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-10-14 16:53:31.580321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-10-14 16:53:31.580361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-10-14 16:53:31.580464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-10-14 16:53:31.580484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-10-14 16:53:31.580678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-10-14 16:53:31.580703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-10-14 16:53:31.580855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-10-14 16:53:31.580875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 
00:28:27.267 [2024-10-14 16:53:31.580965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-10-14 16:53:31.580985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-10-14 16:53:31.581149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-10-14 16:53:31.581170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-10-14 16:53:31.581251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.267 [2024-10-14 16:53:31.581271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.267 qpair failed and we were unable to recover it. 00:28:27.267 [2024-10-14 16:53:31.581354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.581374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-10-14 16:53:31.581542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.581563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-10-14 16:53:31.581739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.581761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-10-14 16:53:31.581847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.581868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-10-14 16:53:31.581966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.581987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-10-14 16:53:31.582072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.582092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-10-14 16:53:31.582179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.582201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 
00:28:27.268 [2024-10-14 16:53:31.582296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.582316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-10-14 16:53:31.582515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.582536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-10-14 16:53:31.582708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.582731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-10-14 16:53:31.582888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.582907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.268 qpair failed and we were unable to recover it. 00:28:27.268 [2024-10-14 16:53:31.583068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.268 [2024-10-14 16:53:31.583089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-10-14 16:53:31.583172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-10-14 16:53:31.583192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-10-14 16:53:31.583358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-10-14 16:53:31.583379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-10-14 16:53:31.583540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-10-14 16:53:31.583570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-10-14 16:53:31.583680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-10-14 16:53:31.583710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 00:28:27.269 [2024-10-14 16:53:31.583892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.269 [2024-10-14 16:53:31.583921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.269 qpair failed and we were unable to recover it. 
00:28:27.271 [2024-10-14 16:53:31.589460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.271 [2024-10-14 16:53:31.589528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420
00:28:27.271 qpair failed and we were unable to recover it.
00:28:27.271 [2024-10-14 16:53:31.589697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.271 [2024-10-14 16:53:31.589735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420
00:28:27.271 qpair failed and we were unable to recover it.
[... aside from these two attempts on tqpair=0x7f712c000b90, the identical record keeps repeating for tqpair=0x7f7120000b90 from 16:53:31.589 through 16:53:31.622 ...]
00:28:27.285 [2024-10-14 16:53:31.620235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.285 [2024-10-14 16:53:31.620264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.285 qpair failed and we were unable to recover it. 00:28:27.285 [2024-10-14 16:53:31.620523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.285 [2024-10-14 16:53:31.620555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.285 qpair failed and we were unable to recover it. 00:28:27.285 [2024-10-14 16:53:31.620843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.285 [2024-10-14 16:53:31.620865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.285 qpair failed and we were unable to recover it. 00:28:27.285 [2024-10-14 16:53:31.620973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.285 [2024-10-14 16:53:31.620994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.285 qpair failed and we were unable to recover it. 00:28:27.285 [2024-10-14 16:53:31.621238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.285 [2024-10-14 16:53:31.621268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.285 qpair failed and we were unable to recover it. 00:28:27.286 [2024-10-14 16:53:31.621398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-10-14 16:53:31.621428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-10-14 16:53:31.621562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-10-14 16:53:31.621592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-10-14 16:53:31.621778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-10-14 16:53:31.621809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-10-14 16:53:31.621998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-10-14 16:53:31.622030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-10-14 16:53:31.622202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-10-14 16:53:31.622231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 
00:28:27.286 [2024-10-14 16:53:31.622401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-10-14 16:53:31.622431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-10-14 16:53:31.622658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-10-14 16:53:31.622689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-10-14 16:53:31.622902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-10-14 16:53:31.622931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-10-14 16:53:31.623116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-10-14 16:53:31.623147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.286 [2024-10-14 16:53:31.623265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.286 [2024-10-14 16:53:31.623300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.286 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.623460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.623481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.623649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.623690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.623818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.623849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.624048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.624080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.624290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.624320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 
00:28:27.287 [2024-10-14 16:53:31.624562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.624593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.624739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.624770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.624882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.624912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.625098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.625129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.625340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.625371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.625541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.625570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.625707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.625728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.287 [2024-10-14 16:53:31.625900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.287 [2024-10-14 16:53:31.625921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.287 qpair failed and we were unable to recover it. 00:28:27.288 [2024-10-14 16:53:31.626158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-10-14 16:53:31.626184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-10-14 16:53:31.626278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-10-14 16:53:31.626297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 
00:28:27.288 [2024-10-14 16:53:31.626406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-10-14 16:53:31.626428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-10-14 16:53:31.626573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-10-14 16:53:31.626592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-10-14 16:53:31.626705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-10-14 16:53:31.626725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-10-14 16:53:31.626793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-10-14 16:53:31.626812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-10-14 16:53:31.626957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-10-14 16:53:31.626978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-10-14 16:53:31.627072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-10-14 16:53:31.627091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-10-14 16:53:31.627174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-10-14 16:53:31.627197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-10-14 16:53:31.627278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.288 [2024-10-14 16:53:31.627297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.288 qpair failed and we were unable to recover it. 00:28:27.288 [2024-10-14 16:53:31.627490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-10-14 16:53:31.627511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-10-14 16:53:31.627757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-10-14 16:53:31.627779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 
00:28:27.289 [2024-10-14 16:53:31.627970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-10-14 16:53:31.627991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-10-14 16:53:31.628161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-10-14 16:53:31.628182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-10-14 16:53:31.628353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-10-14 16:53:31.628383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-10-14 16:53:31.628640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-10-14 16:53:31.628673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-10-14 16:53:31.628797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-10-14 16:53:31.628827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-10-14 16:53:31.628939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-10-14 16:53:31.628970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-10-14 16:53:31.629206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-10-14 16:53:31.629237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.289 [2024-10-14 16:53:31.629374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.289 [2024-10-14 16:53:31.629414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.289 qpair failed and we were unable to recover it. 00:28:27.290 [2024-10-14 16:53:31.629588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-10-14 16:53:31.629614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-10-14 16:53:31.629713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-10-14 16:53:31.629733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 
00:28:27.290 [2024-10-14 16:53:31.629895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-10-14 16:53:31.629915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-10-14 16:53:31.630061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-10-14 16:53:31.630097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-10-14 16:53:31.630226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-10-14 16:53:31.630257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-10-14 16:53:31.630440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-10-14 16:53:31.630472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-10-14 16:53:31.630739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-10-14 16:53:31.630771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-10-14 16:53:31.630961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-10-14 16:53:31.630993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.290 [2024-10-14 16:53:31.631240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.290 [2024-10-14 16:53:31.631272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.290 qpair failed and we were unable to recover it. 00:28:27.291 [2024-10-14 16:53:31.631410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.291 [2024-10-14 16:53:31.631440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.291 qpair failed and we were unable to recover it. 00:28:27.291 [2024-10-14 16:53:31.631691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.291 [2024-10-14 16:53:31.631714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.291 qpair failed and we were unable to recover it. 00:28:27.291 [2024-10-14 16:53:31.631911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.291 [2024-10-14 16:53:31.631940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.291 qpair failed and we were unable to recover it. 
00:28:27.291 [2024-10-14 16:53:31.632057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.291 [2024-10-14 16:53:31.632088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.291 qpair failed and we were unable to recover it. 00:28:27.291 [2024-10-14 16:53:31.632222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.291 [2024-10-14 16:53:31.632252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.291 qpair failed and we were unable to recover it. 00:28:27.291 [2024-10-14 16:53:31.632490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.292 [2024-10-14 16:53:31.632519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.292 qpair failed and we were unable to recover it. 00:28:27.292 [2024-10-14 16:53:31.632769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.292 [2024-10-14 16:53:31.632802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.292 qpair failed and we were unable to recover it. 00:28:27.292 [2024-10-14 16:53:31.633003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.292 [2024-10-14 16:53:31.633033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.292 qpair failed and we were unable to recover it. 00:28:27.292 [2024-10-14 16:53:31.633154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.292 [2024-10-14 16:53:31.633185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.292 qpair failed and we were unable to recover it. 00:28:27.292 [2024-10-14 16:53:31.633373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.292 [2024-10-14 16:53:31.633403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.292 qpair failed and we were unable to recover it. 00:28:27.293 [2024-10-14 16:53:31.633516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.293 [2024-10-14 16:53:31.633536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.293 qpair failed and we were unable to recover it. 00:28:27.293 [2024-10-14 16:53:31.633692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.293 [2024-10-14 16:53:31.633718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.293 qpair failed and we were unable to recover it. 00:28:27.293 [2024-10-14 16:53:31.633883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.293 [2024-10-14 16:53:31.633904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.293 qpair failed and we were unable to recover it. 
00:28:27.293 [2024-10-14 16:53:31.634049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.293 [2024-10-14 16:53:31.634070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.293 qpair failed and we were unable to recover it. 00:28:27.293 [2024-10-14 16:53:31.634240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.293 [2024-10-14 16:53:31.634261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.293 qpair failed and we were unable to recover it. 00:28:27.293 [2024-10-14 16:53:31.634497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.293 [2024-10-14 16:53:31.634518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.294 qpair failed and we were unable to recover it. 00:28:27.294 [2024-10-14 16:53:31.634613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.294 [2024-10-14 16:53:31.634633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.294 qpair failed and we were unable to recover it. 00:28:27.294 [2024-10-14 16:53:31.634817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.294 [2024-10-14 16:53:31.634839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.294 qpair failed and we were unable to recover it. 00:28:27.294 [2024-10-14 16:53:31.634939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.294 [2024-10-14 16:53:31.634960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.294 qpair failed and we were unable to recover it. 00:28:27.294 [2024-10-14 16:53:31.635075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.294 [2024-10-14 16:53:31.635096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.294 qpair failed and we were unable to recover it. 00:28:27.294 [2024-10-14 16:53:31.635242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.294 [2024-10-14 16:53:31.635263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.294 qpair failed and we were unable to recover it. 00:28:27.294 [2024-10-14 16:53:31.635418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.294 [2024-10-14 16:53:31.635439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.294 qpair failed and we were unable to recover it. 00:28:27.294 [2024-10-14 16:53:31.635586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.294 [2024-10-14 16:53:31.635612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.295 qpair failed and we were unable to recover it. 
00:28:27.295 [2024-10-14 16:53:31.635716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.295 [2024-10-14 16:53:31.635738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.295 qpair failed and we were unable to recover it. 00:28:27.295 [2024-10-14 16:53:31.635929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.295 [2024-10-14 16:53:31.635950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.295 qpair failed and we were unable to recover it. 00:28:27.295 [2024-10-14 16:53:31.636055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.295 [2024-10-14 16:53:31.636076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.295 qpair failed and we were unable to recover it. 00:28:27.295 [2024-10-14 16:53:31.636175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.295 [2024-10-14 16:53:31.636196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.295 qpair failed and we were unable to recover it. 00:28:27.295 [2024-10-14 16:53:31.636309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.295 [2024-10-14 16:53:31.636330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.295 qpair failed and we were unable to recover it. 00:28:27.295 [2024-10-14 16:53:31.636483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.295 [2024-10-14 16:53:31.636503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.295 qpair failed and we were unable to recover it. 00:28:27.295 [2024-10-14 16:53:31.636595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.295 [2024-10-14 16:53:31.636629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.295 qpair failed and we were unable to recover it. 00:28:27.295 [2024-10-14 16:53:31.636862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.296 [2024-10-14 16:53:31.636882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.296 qpair failed and we were unable to recover it. 00:28:27.296 [2024-10-14 16:53:31.637039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.296 [2024-10-14 16:53:31.637059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.296 qpair failed and we were unable to recover it. 00:28:27.296 [2024-10-14 16:53:31.637211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.296 [2024-10-14 16:53:31.637247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.296 qpair failed and we were unable to recover it. 
00:28:27.296 [2024-10-14 16:53:31.637484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.296 [2024-10-14 16:53:31.637515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.296 qpair failed and we were unable to recover it. 00:28:27.296 [2024-10-14 16:53:31.637757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.296 [2024-10-14 16:53:31.637797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.296 qpair failed and we were unable to recover it. 00:28:27.296 [2024-10-14 16:53:31.637959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.296 [2024-10-14 16:53:31.637980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.296 qpair failed and we were unable to recover it. 00:28:27.296 [2024-10-14 16:53:31.638139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.297 [2024-10-14 16:53:31.638159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.297 qpair failed and we were unable to recover it. 00:28:27.297 [2024-10-14 16:53:31.638266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.297 [2024-10-14 16:53:31.638287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.297 qpair failed and we were unable to recover it. 00:28:27.297 [2024-10-14 16:53:31.638441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.297 [2024-10-14 16:53:31.638463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.297 qpair failed and we were unable to recover it. 00:28:27.297 [2024-10-14 16:53:31.638570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.297 [2024-10-14 16:53:31.638590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.297 qpair failed and we were unable to recover it. 00:28:27.297 [2024-10-14 16:53:31.638813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.297 [2024-10-14 16:53:31.638834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.297 qpair failed and we were unable to recover it. 00:28:27.297 [2024-10-14 16:53:31.638915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.297 [2024-10-14 16:53:31.638935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.297 qpair failed and we were unable to recover it. 00:28:27.297 [2024-10-14 16:53:31.639114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.297 [2024-10-14 16:53:31.639135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.297 qpair failed and we were unable to recover it. 
00:28:27.297 [2024-10-14 16:53:31.639300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.297 [2024-10-14 16:53:31.639321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.297 qpair failed and we were unable to recover it. 00:28:27.297 [2024-10-14 16:53:31.639512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.297 [2024-10-14 16:53:31.639542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.297 qpair failed and we were unable to recover it. 00:28:27.297 [2024-10-14 16:53:31.639677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.297 [2024-10-14 16:53:31.639709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.297 qpair failed and we were unable to recover it. 00:28:27.298 [2024-10-14 16:53:31.639841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.298 [2024-10-14 16:53:31.639872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.298 qpair failed and we were unable to recover it. 00:28:27.298 [2024-10-14 16:53:31.639992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.298 [2024-10-14 16:53:31.640023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.298 qpair failed and we were unable to recover it. 00:28:27.298 [2024-10-14 16:53:31.640214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.298 [2024-10-14 16:53:31.640245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.298 qpair failed and we were unable to recover it. 00:28:27.298 [2024-10-14 16:53:31.640368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.298 [2024-10-14 16:53:31.640399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.298 qpair failed and we were unable to recover it. 00:28:27.298 [2024-10-14 16:53:31.640569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.298 [2024-10-14 16:53:31.640609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.298 qpair failed and we were unable to recover it. 00:28:27.298 [2024-10-14 16:53:31.640712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.298 [2024-10-14 16:53:31.640753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.298 qpair failed and we were unable to recover it. 00:28:27.299 [2024-10-14 16:53:31.641043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.299 [2024-10-14 16:53:31.641074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.299 qpair failed and we were unable to recover it. 
00:28:27.299 [2024-10-14 16:53:31.641281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.299 [2024-10-14 16:53:31.641311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.299 qpair failed and we were unable to recover it. 00:28:27.299 [2024-10-14 16:53:31.641488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.299 [2024-10-14 16:53:31.641520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.299 qpair failed and we were unable to recover it. 00:28:27.299 [2024-10-14 16:53:31.641699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.299 [2024-10-14 16:53:31.641732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.299 qpair failed and we were unable to recover it. 00:28:27.299 [2024-10-14 16:53:31.641850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.299 [2024-10-14 16:53:31.641880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.299 qpair failed and we were unable to recover it. 00:28:27.299 [2024-10-14 16:53:31.642007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.299 [2024-10-14 16:53:31.642038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.299 qpair failed and we were unable to recover it. 00:28:27.299 [2024-10-14 16:53:31.642208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.299 [2024-10-14 16:53:31.642239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.299 qpair failed and we were unable to recover it. 00:28:27.299 [2024-10-14 16:53:31.642357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.299 [2024-10-14 16:53:31.642387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.299 qpair failed and we were unable to recover it. 00:28:27.300 [2024-10-14 16:53:31.642517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.300 [2024-10-14 16:53:31.642538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.300 qpair failed and we were unable to recover it. 00:28:27.300 [2024-10-14 16:53:31.642780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.300 [2024-10-14 16:53:31.642803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.300 qpair failed and we were unable to recover it. 00:28:27.300 [2024-10-14 16:53:31.642953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.300 [2024-10-14 16:53:31.642973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.300 qpair failed and we were unable to recover it. 
00:28:27.300 [2024-10-14 16:53:31.643134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.300 [2024-10-14 16:53:31.643156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.300 qpair failed and we were unable to recover it. 00:28:27.300 [2024-10-14 16:53:31.643250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.300 [2024-10-14 16:53:31.643271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.300 qpair failed and we were unable to recover it. 00:28:27.300 [2024-10-14 16:53:31.643391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.300 [2024-10-14 16:53:31.643411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.300 qpair failed and we were unable to recover it. 00:28:27.300 [2024-10-14 16:53:31.643508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.300 [2024-10-14 16:53:31.643529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.300 qpair failed and we were unable to recover it. 00:28:27.301 [2024-10-14 16:53:31.643690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.301 [2024-10-14 16:53:31.643712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.301 qpair failed and we were unable to recover it. 00:28:27.301 [2024-10-14 16:53:31.643802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.301 [2024-10-14 16:53:31.643822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.301 qpair failed and we were unable to recover it. 00:28:27.301 [2024-10-14 16:53:31.643923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.301 [2024-10-14 16:53:31.643944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.301 qpair failed and we were unable to recover it. 00:28:27.301 [2024-10-14 16:53:31.644055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.301 [2024-10-14 16:53:31.644076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.301 qpair failed and we were unable to recover it. 00:28:27.301 [2024-10-14 16:53:31.644223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.301 [2024-10-14 16:53:31.644243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.301 qpair failed and we were unable to recover it. 00:28:27.301 [2024-10-14 16:53:31.644490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.301 [2024-10-14 16:53:31.644521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.301 qpair failed and we were unable to recover it. 
00:28:27.301 [2024-10-14 16:53:31.644694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.301 [2024-10-14 16:53:31.644727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.301 qpair failed and we were unable to recover it. 00:28:27.301 [2024-10-14 16:53:31.644918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.301 [2024-10-14 16:53:31.644948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.301 qpair failed and we were unable to recover it. 00:28:27.301 [2024-10-14 16:53:31.645051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.302 [2024-10-14 16:53:31.645083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.302 qpair failed and we were unable to recover it. 00:28:27.302 [2024-10-14 16:53:31.645273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.302 [2024-10-14 16:53:31.645304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.302 qpair failed and we were unable to recover it. 00:28:27.302 [2024-10-14 16:53:31.645479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.302 [2024-10-14 16:53:31.645499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.302 qpair failed and we were unable to recover it. 00:28:27.302 [2024-10-14 16:53:31.645613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.302 [2024-10-14 16:53:31.645635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.302 qpair failed and we were unable to recover it. 00:28:27.302 [2024-10-14 16:53:31.645799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.302 [2024-10-14 16:53:31.645820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.302 qpair failed and we were unable to recover it. 00:28:27.302 [2024-10-14 16:53:31.646004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.302 [2024-10-14 16:53:31.646025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.303 qpair failed and we were unable to recover it. 00:28:27.303 [2024-10-14 16:53:31.646140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.303 [2024-10-14 16:53:31.646162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.303 qpair failed and we were unable to recover it. 00:28:27.303 [2024-10-14 16:53:31.646251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.303 [2024-10-14 16:53:31.646272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.303 qpair failed and we were unable to recover it. 
00:28:27.303 [2024-10-14 16:53:31.646508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.303 [2024-10-14 16:53:31.646528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.303 qpair failed and we were unable to recover it. 00:28:27.303 [2024-10-14 16:53:31.646767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.303 [2024-10-14 16:53:31.646789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.303 qpair failed and we were unable to recover it. 00:28:27.303 [2024-10-14 16:53:31.646955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.303 [2024-10-14 16:53:31.646977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.303 qpair failed and we were unable to recover it. 00:28:27.303 [2024-10-14 16:53:31.647083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.303 [2024-10-14 16:53:31.647104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.303 qpair failed and we were unable to recover it. 00:28:27.303 [2024-10-14 16:53:31.647181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.303 [2024-10-14 16:53:31.647200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.303 qpair failed and we were unable to recover it. 00:28:27.303 [2024-10-14 16:53:31.647411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.303 [2024-10-14 16:53:31.647482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.303 qpair failed and we were unable to recover it. 00:28:27.303 [2024-10-14 16:53:31.647701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.303 [2024-10-14 16:53:31.647742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.303 qpair failed and we were unable to recover it. 00:28:27.303 [2024-10-14 16:53:31.647929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.304 [2024-10-14 16:53:31.647954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.304 qpair failed and we were unable to recover it. 00:28:27.304 [2024-10-14 16:53:31.648192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.304 [2024-10-14 16:53:31.648214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.304 qpair failed and we were unable to recover it. 00:28:27.304 [2024-10-14 16:53:31.648338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.304 [2024-10-14 16:53:31.648359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.304 qpair failed and we were unable to recover it. 
00:28:27.304 [2024-10-14 16:53:31.648452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.304 [2024-10-14 16:53:31.648472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.304 qpair failed and we were unable to recover it. 00:28:27.304 [2024-10-14 16:53:31.648625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.304 [2024-10-14 16:53:31.648647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.304 qpair failed and we were unable to recover it. 00:28:27.304 [2024-10-14 16:53:31.648816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.304 [2024-10-14 16:53:31.648837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.304 qpair failed and we were unable to recover it. 00:28:27.304 [2024-10-14 16:53:31.648945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.304 [2024-10-14 16:53:31.648966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.304 qpair failed and we were unable to recover it. 00:28:27.304 [2024-10-14 16:53:31.649054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.304 [2024-10-14 16:53:31.649074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.304 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.649185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.649206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.649367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.649388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.649498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.649520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.649756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.649778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.649957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.649978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 
00:28:27.305 [2024-10-14 16:53:31.650157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.650178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.650263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.650282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.650445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.650467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.650742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.650775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.651014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.651044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.651230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.651261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.651502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.651532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.651739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.651772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.651979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.305 [2024-10-14 16:53:31.652010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.305 qpair failed and we were unable to recover it. 00:28:27.305 [2024-10-14 16:53:31.652208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.306 [2024-10-14 16:53:31.652239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.306 qpair failed and we were unable to recover it. 
00:28:27.306 [2024-10-14 16:53:31.652360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.306 [2024-10-14 16:53:31.652390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.306 qpair failed and we were unable to recover it. 00:28:27.306 [2024-10-14 16:53:31.652507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.306 [2024-10-14 16:53:31.652528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.307 qpair failed and we were unable to recover it. 00:28:27.307 [2024-10-14 16:53:31.652695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.307 [2024-10-14 16:53:31.652717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.307 qpair failed and we were unable to recover it. 00:28:27.307 [2024-10-14 16:53:31.652812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.307 [2024-10-14 16:53:31.652833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.307 qpair failed and we were unable to recover it. 00:28:27.307 [2024-10-14 16:53:31.652989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.307 [2024-10-14 16:53:31.653010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.307 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.653117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.653142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.653388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.653409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.653522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.653543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.653647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.653669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.653771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.653793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 
00:28:27.308 [2024-10-14 16:53:31.653964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.653984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.654224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.654245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.654353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.654374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.654618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.654640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.654888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.654908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.655012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.655032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.655153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.655174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.655274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.655295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.655447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.655468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.655689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.655711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 
00:28:27.308 [2024-10-14 16:53:31.655872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.655893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.656041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.308 [2024-10-14 16:53:31.656062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.308 qpair failed and we were unable to recover it. 00:28:27.308 [2024-10-14 16:53:31.656225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.656247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.656341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.656361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.656577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.656598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.656698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.656719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.656868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.656889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.656990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.657011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.657176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.657198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.657415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.657436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 
00:28:27.309 [2024-10-14 16:53:31.657521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.657541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.657700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.657723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.657897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.657919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.658151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.658172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.658336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.658357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.658505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.658527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.658620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.658641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.658795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.658816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.658985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.659007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.659159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.659180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 
00:28:27.309 [2024-10-14 16:53:31.659336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.659357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.659467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.659488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.659640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.659662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.659841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.659862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.659962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.659982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.660060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.660084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.660248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.660268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.660437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.660458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.660674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.660696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.660860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.660881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 
00:28:27.309 [2024-10-14 16:53:31.661121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.661151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.661340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.661371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.661555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.661594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.661763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.661785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.661871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.661891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.309 [2024-10-14 16:53:31.661989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.309 [2024-10-14 16:53:31.662009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.309 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.662225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.662246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.662442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.662463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.662615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.662637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.662761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.662783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 
00:28:27.310 [2024-10-14 16:53:31.662896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.662917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.663083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.663103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.663276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.663297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.663466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.663487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.663647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.663669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.663857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.663887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.664010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.664041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.664212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.664242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.664419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.664451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.664618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.664688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 
00:28:27.310 [2024-10-14 16:53:31.664975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.665019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.665287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.665320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.665512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.665545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.665676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.665709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.665901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.665933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.666138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.666162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.666315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.666336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.666551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.666572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.666743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.666783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.666960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.666991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 
00:28:27.310 [2024-10-14 16:53:31.667251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.667281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.667459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.667481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.667631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.667654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.667768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.667789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.667895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.667915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.668015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.668040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.310 qpair failed and we were unable to recover it. 00:28:27.310 [2024-10-14 16:53:31.668153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.310 [2024-10-14 16:53:31.668174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.668363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.668384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.668631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.668653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.668762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.668783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 
00:28:27.311 [2024-10-14 16:53:31.668895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.668917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.669002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.669022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.669107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.669128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.669223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.669243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.669343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.669364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.669557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.669577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.669665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.669693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.669862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.669883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.669972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.669993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.670196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.670217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 
00:28:27.311 [2024-10-14 16:53:31.670312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.670333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.670490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.670510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.670684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.670715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.670908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.670939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.671071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.671103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.671222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.671252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.671367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.671398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.671566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.671597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.671790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.671821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.672058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.672088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 
00:28:27.311 [2024-10-14 16:53:31.672189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.672221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.672404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.672434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.672645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.672679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.672860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.672891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.673079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.673110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.673292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.673322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.673576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.673597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.673766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.673794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.673879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.311 [2024-10-14 16:53:31.673899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.311 qpair failed and we were unable to recover it. 00:28:27.311 [2024-10-14 16:53:31.674061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.674081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 
00:28:27.312 [2024-10-14 16:53:31.674230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.674251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.674355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.674376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.674542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.674563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.674675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.674700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.674872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.674893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.675055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.675080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.675226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.675248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.675410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.675431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.675513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.675532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.675799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.675821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 
00:28:27.312 [2024-10-14 16:53:31.675924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.675945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.676095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.676116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.676333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.676354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.676519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.676541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.676711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.676754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.676938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.676970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.677166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.677196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.677464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.677485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.677651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.677674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.677782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.677804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 
00:28:27.312 [2024-10-14 16:53:31.677963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.677984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.678136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.678157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.678318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.678338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.678486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.678507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.678613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.678635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.678797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.678818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.679009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.679030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.679178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.679200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.679280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.679300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.679411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.679433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 
00:28:27.312 [2024-10-14 16:53:31.679596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.679623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.679709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.679728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.679945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.680015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.680289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.312 [2024-10-14 16:53:31.680359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.312 qpair failed and we were unable to recover it. 00:28:27.312 [2024-10-14 16:53:31.680501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.680536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.680755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.680790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.681005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.681037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.681141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.681171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.681276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.681299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.681528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.681549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 
00:28:27.313 [2024-10-14 16:53:31.681646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.681667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.681841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.681861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.682077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.682109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.682231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.682261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.682454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.682485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.682669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.682708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.682827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.682857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.683040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.683072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.683197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.683227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.683401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.683443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 
00:28:27.313 [2024-10-14 16:53:31.683611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.683633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.683805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.683825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.683994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.684026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.684145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.684176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.684414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.684445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.684684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.684707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.313 qpair failed and we were unable to recover it. 00:28:27.313 [2024-10-14 16:53:31.684902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.313 [2024-10-14 16:53:31.684923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.685136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.685157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.685319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.685339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.685451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.685473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 
00:28:27.314 [2024-10-14 16:53:31.685564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.685586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.685699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.685721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.685870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.685891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.686047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.686069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.686155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.686174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.686265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.686286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.686388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.686409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.686563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.686584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.686752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.686773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.686930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.686951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 
00:28:27.314 [2024-10-14 16:53:31.687126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.687146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.687240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.687261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.687438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.687477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.687610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.687644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.687768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.687799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.687971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.688002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.688182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.688212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.688473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.688504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.688635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.688667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.688865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.688896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 
00:28:27.314 [2024-10-14 16:53:31.688996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.689028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.689218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.689241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.689435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.689473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.689654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.689687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.689867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.689897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.314 [2024-10-14 16:53:31.690082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.314 [2024-10-14 16:53:31.690119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.314 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.690302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.690332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.690500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.690520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.690697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.690739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.690923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.690954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 
00:28:27.315 [2024-10-14 16:53:31.691150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.691181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.691313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.691344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.691642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.691675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.691935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.691956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.692042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.692062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.692213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.692233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.692400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.692421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.692614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.692636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.692865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.692895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.693177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.693208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 
00:28:27.315 [2024-10-14 16:53:31.693325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.693356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.693633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.693654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.693882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.693904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.694132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.694154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.694263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.694284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.694442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.694464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.694686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.694708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.694867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.694888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.695000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.695021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.695181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.695202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 
00:28:27.315 [2024-10-14 16:53:31.695370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.695391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.695542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.695563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.695674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.695697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.695853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.695873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.696025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.696064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.696259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.315 [2024-10-14 16:53:31.696291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.315 qpair failed and we were unable to recover it. 00:28:27.315 [2024-10-14 16:53:31.696463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.696492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.696676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.696697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.696859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.696880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.696970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.696990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 
00:28:27.316 [2024-10-14 16:53:31.697154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.697184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.697291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.697322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.697561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.697591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.697781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.697801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.697964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.697993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.698109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.698139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.698325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.698357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.698481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.698522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.698633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.698656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.698753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.698771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 
00:28:27.316 [2024-10-14 16:53:31.698870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.698891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.699059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.699079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.699240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.699260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.699367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.699388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.699630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.699652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.699887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.699908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.699998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.700017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.700176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.700198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.700400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.700431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.700646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.700679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 
00:28:27.316 [2024-10-14 16:53:31.700928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.700959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.701093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.701123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.701236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.701266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.701439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.701471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.701678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.701700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.316 qpair failed and we were unable to recover it. 00:28:27.316 [2024-10-14 16:53:31.701784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.316 [2024-10-14 16:53:31.701804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.702020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.702042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.702205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.702226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.702461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.702482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.702634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.702657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 
00:28:27.317 [2024-10-14 16:53:31.702821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.702840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.703007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.703028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.703136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.703161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.703265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.703285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.703437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.703458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.703626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.703647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.703814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.703834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.703984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.704023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.704193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.704224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.704361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.704390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 
00:28:27.317 [2024-10-14 16:53:31.704567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.704597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.704790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.704822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.705081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.705112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.705350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.705381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.705623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.705655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.705828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.705848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.705943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.705964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.706057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.706077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.706239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.706261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.706446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.706466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 
00:28:27.317 [2024-10-14 16:53:31.706634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.706656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.706835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.706866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.706995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.707026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.707157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.707186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.317 [2024-10-14 16:53:31.707424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.317 [2024-10-14 16:53:31.707455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.317 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.707561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.707592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.707734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.707771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.707921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.707941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.708031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.708052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.708334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.708364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 
00:28:27.318 [2024-10-14 16:53:31.708475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.708504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.708686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.708718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.708896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.708927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.709107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.709138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.709323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.709353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.709525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.709545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.709724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.709762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.710001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.710032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.710210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.710241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.710508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.710539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 
00:28:27.318 [2024-10-14 16:53:31.710785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.710807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.711037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.711057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.711276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.711301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.711412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.711432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.711526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.711547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.711654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.711675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.711839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.711859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.712015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.712036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.712142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.712162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.712345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.712367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 
00:28:27.318 [2024-10-14 16:53:31.712605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.712627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.712821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.712842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.713005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.713025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.713111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.713130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.713214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.713233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.713402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.318 [2024-10-14 16:53:31.713434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.318 qpair failed and we were unable to recover it. 00:28:27.318 [2024-10-14 16:53:31.713623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.713655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.713785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.713814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.713929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.713969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.714057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.714077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 
00:28:27.319 [2024-10-14 16:53:31.714176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.714196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.714345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.714366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.714470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.714490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.714565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.714584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.714704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.714725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.714874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.714895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.715055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.715075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.715293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.715314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.715426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.715447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.715558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.715579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 
00:28:27.319 [2024-10-14 16:53:31.715747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.715769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.715953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.715983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.716145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.716176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.716344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.716374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.716554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.716583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.716774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.716806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.319 [2024-10-14 16:53:31.717087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.319 [2024-10-14 16:53:31.717117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.319 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.717319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.717350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.717533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.717554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.717728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.717762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 
00:28:27.320 [2024-10-14 16:53:31.717945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.717975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.718159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.718188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.718383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.718423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.718687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.718708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.718826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.718846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.719061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.719082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.719317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.719337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.719420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.719440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.719597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.719629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.719809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.719830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 
00:28:27.320 [2024-10-14 16:53:31.720086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.720117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.720290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.720321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.720511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.720541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.720667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.720699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.720885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.720906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.721094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.721115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.721282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.721303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.721462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.721493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.721729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.721762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.721872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.721901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 
00:28:27.320 [2024-10-14 16:53:31.722024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.722056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.722170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.722200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.722317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.722347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.722607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.722629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.722802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.722823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.722928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.722947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.723126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.723146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.723242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.723261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.320 qpair failed and we were unable to recover it. 00:28:27.320 [2024-10-14 16:53:31.723420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.320 [2024-10-14 16:53:31.723441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.723537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.723557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 
00:28:27.321 [2024-10-14 16:53:31.723665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.723685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.723840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.723861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.724016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.724036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.724139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.724159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.724312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.724333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.724435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.724456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.724660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.724683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.724913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.724934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.725029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.725050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.725198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.725217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 
00:28:27.321 [2024-10-14 16:53:31.725487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.725508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.725702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.725724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.725807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.725829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.725997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.726018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.726112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.726133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.726297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.726318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.726554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.726574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.726696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.726717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.726873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.726894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.726986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.727005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 
00:28:27.321 [2024-10-14 16:53:31.727166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.727186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.727295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.727315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.727480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.727500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.727713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.727734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.727955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.727975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.728079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.728099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.728190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.728210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.728373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.728394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.728572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.728593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.728759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.728780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 
00:28:27.321 [2024-10-14 16:53:31.728936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.728957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.321 qpair failed and we were unable to recover it. 00:28:27.321 [2024-10-14 16:53:31.729196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.321 [2024-10-14 16:53:31.729219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.729317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.729337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.729553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.729575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.729692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.729714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.729823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.729844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.730026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.730047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.730205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.730225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.730402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.730422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.730641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.730664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 
00:28:27.322 [2024-10-14 16:53:31.730812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.730833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.730938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.730959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.731127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.731148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.731297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.731318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.731470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.731489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.731643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.731681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.731862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.731882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.732108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.732129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.732344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.732366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.732556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.732576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 
00:28:27.322 [2024-10-14 16:53:31.732745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.732766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.732917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.732938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.733105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.733131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.733214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.733233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.733314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.733333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.733496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.733517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.733769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.733791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.733887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.733911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.734147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.734169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.734421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.734442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 
00:28:27.322 [2024-10-14 16:53:31.734614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.734635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.734798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.734819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.735034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.735055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.735269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.735290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.735472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.735493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.735587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.735615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.322 qpair failed and we were unable to recover it. 00:28:27.322 [2024-10-14 16:53:31.735712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.322 [2024-10-14 16:53:31.735733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.735820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.735840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.736078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.736099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.736206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.736228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 
00:28:27.323 [2024-10-14 16:53:31.736388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.736407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.736573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.736594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.736769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.736790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.737040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.737061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.737294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.737315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.737586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.737613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.737775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.737796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.737898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.737919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.738067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.738087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.738309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.738330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 
00:28:27.323 [2024-10-14 16:53:31.738497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.738517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.738694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.738716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.738862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.738884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.739043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.739065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.739168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.739188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.739337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.739358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.739520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.739540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.739645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.739666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.739830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.739852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.739956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.739976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 
00:28:27.323 [2024-10-14 16:53:31.740140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.740161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.740323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.740342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.740517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.740541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.740645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.740667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.740837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.740857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.740945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.740965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.741145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.741167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.741327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.741347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.741534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.741555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 00:28:27.323 [2024-10-14 16:53:31.741712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.323 [2024-10-14 16:53:31.741734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.323 qpair failed and we were unable to recover it. 
00:28:27.323 [2024-10-14 16:53:31.741838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.323 [2024-10-14 16:53:31.741858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:27.323 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 16:53:31.741 to 16:53:31.776, mostly for tqpair=0x7f7120000b90 and, between roughly 16:53:31.759 and 16:53:31.765, for tqpair=0x7f712c000b90 ...]
00:28:27.330 [2024-10-14 16:53:31.776532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.330 [2024-10-14 16:53:31.776554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:27.330 qpair failed and we were unable to recover it.
00:28:27.330 [2024-10-14 16:53:31.776649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.776670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.776823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.776844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.776963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.776983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.777085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.777107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.777352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.777373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.777523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.777544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.777648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.777670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.777827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.777849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.777931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.777951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.778171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.778193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 
00:28:27.330 [2024-10-14 16:53:31.778292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.778313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.778398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.778420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.778619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.778642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.778746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.778766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.778864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.778885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.779043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.779063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.779321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.779342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.330 [2024-10-14 16:53:31.779439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.330 [2024-10-14 16:53:31.779460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.330 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.779646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.779669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.779767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.779788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 
00:28:27.331 [2024-10-14 16:53:31.780024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.780046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.780214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.780234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.780389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.780414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.780512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.780534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.780631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.780654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.780751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.780771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.780924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.780946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.781046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.781067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.781159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.781180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.781394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.781416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 
00:28:27.331 [2024-10-14 16:53:31.781571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.781592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.781761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.781783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.781975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.781997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.782078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.782102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.782251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.782272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.782434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.782455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.782676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.782699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.782780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.782799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.782947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.782968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.783123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.783144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 
00:28:27.331 [2024-10-14 16:53:31.783413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.783434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.783545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.783567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.783733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.783754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.783853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.783875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.783955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.783978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.784067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.784089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.784308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.784330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.784434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.784456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.784539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.784559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 00:28:27.331 [2024-10-14 16:53:31.784741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.331 [2024-10-14 16:53:31.784764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.331 qpair failed and we were unable to recover it. 
00:28:27.331 [2024-10-14 16:53:31.784865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.784886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.785036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.785057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.785216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.785237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.785318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.785340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.785444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.785465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.785616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.785638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.785720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.785740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.785924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.785945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.786167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.786189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.786281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.786302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 
00:28:27.332 [2024-10-14 16:53:31.786385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.786406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.786494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.786515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.786686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.786708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.786875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.786897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.786995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.787016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.787109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.787130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.787279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.787300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.787523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.787544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.787640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.787663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.787760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.787781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 
00:28:27.332 [2024-10-14 16:53:31.787881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.787902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.788061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.788083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.788180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.788201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.788298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.788320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.788470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.788491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.788713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.788735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.788903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.788924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.789004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.789026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.789243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.789264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.789447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.789469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 
00:28:27.332 [2024-10-14 16:53:31.789622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.789644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.789740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.789761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.789871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.789892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.790108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.790129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.332 qpair failed and we were unable to recover it. 00:28:27.332 [2024-10-14 16:53:31.790228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.332 [2024-10-14 16:53:31.790249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.790352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.790373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.790467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.790488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.790597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.790627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.790723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.790744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.790847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.790873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 
00:28:27.333 [2024-10-14 16:53:31.790975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.790995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.791153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.791174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.791329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.791350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.791631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.791653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.791747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.791768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.791879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.791901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.792122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.792143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.792248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.792268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.792445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.792466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.792641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.792662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 
00:28:27.333 [2024-10-14 16:53:31.792814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.792835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.793075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.793096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.793181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.793202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.793376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.793397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.793567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.793589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.793701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.793723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.793938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.793959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.794053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.794074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.794269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.794290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.794374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.794395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 
00:28:27.333 [2024-10-14 16:53:31.794501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.794521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.794677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.794708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.794791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.794813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.795054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.795074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.795315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.795337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.795493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.795513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.795737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.795759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.795942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.795963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.796048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.796068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.333 [2024-10-14 16:53:31.796231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.796251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 
00:28:27.333 [2024-10-14 16:53:31.796409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.333 [2024-10-14 16:53:31.796430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.333 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.796526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.796548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.796712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.796735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.796900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.796921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.797077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.797098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.797203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.797224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.797363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.797434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.797578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.797649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.797834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.797866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.797959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.797988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 
00:28:27.334 [2024-10-14 16:53:31.798154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.798175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.798352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.798374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.798539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.798561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.798733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.798755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.798857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.798878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.798973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.798994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.799235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.799256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.799422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.799442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.799628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.799651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.799752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.799772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 
00:28:27.334 [2024-10-14 16:53:31.799881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.799903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.800068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.800089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.800185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.800206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.800289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.800310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.800474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.800495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.800577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.800598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.800779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.800801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.800956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.800976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.801072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.801092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.801189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.801209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 
00:28:27.334 [2024-10-14 16:53:31.801296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.801317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.801472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.801492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.801642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.801664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.801814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.801835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.801987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.334 [2024-10-14 16:53:31.802008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.334 qpair failed and we were unable to recover it. 00:28:27.334 [2024-10-14 16:53:31.802151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.802171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.802340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.802360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.802466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.802487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.802638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.802661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.802758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.802779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 
00:28:27.335 [2024-10-14 16:53:31.802878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.802899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.803050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.803070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.803224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.803245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.803438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.803459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.803626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.803648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.803839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.803859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.803969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.803991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.804084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.804105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.804254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.804275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.804386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.804410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 
00:28:27.335 [2024-10-14 16:53:31.804497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.804519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.804667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.804689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.804806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.804827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.804982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.805003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.805077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.805098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.805267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.805288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.805449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.805469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.805563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.805584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.805689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.805710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.805798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.805819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 
00:28:27.335 [2024-10-14 16:53:31.805991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.806012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.806128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.806149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.806317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.806337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.806495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.806516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.806670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.806693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.806786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.806806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.806969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.806990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.807233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.807253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.807335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.807355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.807523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.807544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 
00:28:27.335 [2024-10-14 16:53:31.807632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.807655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.807896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.807916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.808012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.335 [2024-10-14 16:53:31.808033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.335 qpair failed and we were unable to recover it. 00:28:27.335 [2024-10-14 16:53:31.808128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.808149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.808297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.808317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.808420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.808441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.808542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.808564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.808742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.808763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.808862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.808882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.809060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.809081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 
00:28:27.336 [2024-10-14 16:53:31.809168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.809188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.809278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.809299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.809543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.809564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.809731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.809752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.809849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.809869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.810018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.810040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.810255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.810275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.810442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.810463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.810545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.810565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.810822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.810848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 
00:28:27.336 [2024-10-14 16:53:31.811100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.811122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.811288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.811309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.811460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.811481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.811592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.811619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.811803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.811824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.811907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.811928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.812040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.812062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.812156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.812177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.812416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.812437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.812534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.812555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 
00:28:27.336 [2024-10-14 16:53:31.812653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.812675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.812840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.812861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.813024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.813045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.813225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.813247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.813489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.813510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.813592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.813619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.813803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.813824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.813989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.814009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.814171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.814192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.814363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.814383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 
00:28:27.336 [2024-10-14 16:53:31.814538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.814559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.814821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.814843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.814939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.814959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.815067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.815087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.815177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.815198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.815289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.815310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.815465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.815487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.815583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.815623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.815797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.815818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.815909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.815930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 
00:28:27.336 [2024-10-14 16:53:31.816028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.816048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.816198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.816219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.816330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.816351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.816521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.816541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.816622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.816643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.816818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.816839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.817059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.817079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.817227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.817248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.817397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.817417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.817680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.817706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 
00:28:27.336 [2024-10-14 16:53:31.817813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.817833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.817916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.817936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.818083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.818104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.818266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.818287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.818446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.336 [2024-10-14 16:53:31.818466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.336 qpair failed and we were unable to recover it. 00:28:27.336 [2024-10-14 16:53:31.818581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.818606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.818831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.818852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.819119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.819139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.819247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.819268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.819379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.819399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 
00:28:27.337 [2024-10-14 16:53:31.819553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.819573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.819750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.819772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.819937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.819958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.820122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.820143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.820237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.820258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.820350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.820370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.820456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.820476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.820635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.820656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.820826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.820847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.821073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.821094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 
00:28:27.337 [2024-10-14 16:53:31.821196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.821217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.821309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.821330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.821562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.821583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.821767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.821788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.821974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.821994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.822164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.822185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.822334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.822355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.822572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.822594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.822783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.822805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.822892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.822913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 
00:28:27.337 [2024-10-14 16:53:31.823098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.823119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.823284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.823305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.823413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.823434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.823580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.823619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.823793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.823814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.823914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.823935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.824126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.824148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.824317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.824339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.824580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.824607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.824711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.824737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 
00:28:27.337 [2024-10-14 16:53:31.824860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.824880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.824971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.824992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.825148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.825169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.825332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.825354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.825460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.825480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.825650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.825672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.825852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.825873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.825956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.825977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.826057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.826078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 00:28:27.337 [2024-10-14 16:53:31.826187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.337 [2024-10-14 16:53:31.826209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.337 qpair failed and we were unable to recover it. 
00:28:27.337 [2024-10-14 16:53:31.826298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.337 [2024-10-14 16:53:31.826319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:27.337 qpair failed and we were unable to recover it.
00:28:27.338 [2024-10-14 16:53:31.828754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.338 [2024-10-14 16:53:31.828822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420
00:28:27.338 qpair failed and we were unable to recover it.
00:28:27.338 [2024-10-14 16:53:31.829117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.338 [2024-10-14 16:53:31.829152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420
00:28:27.338 qpair failed and we were unable to recover it.
00:28:27.339 [2024-10-14 16:53:31.840790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.339 [2024-10-14 16:53:31.840861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420
00:28:27.339 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" (ECONNREFUSED) / "sock connection error" / "qpair failed and we were unable to recover it." entries repeat for the remaining connection attempts to 10.0.0.2, port 4420, alternating between tqpair=0x7f712c000b90 and tqpair=0x1c6dc60, through timestamp 00:28:27.623 ...]
00:28:27.624 [2024-10-14 16:53:31.861276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.861304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.861490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.861519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.861635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.861662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.861769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.861798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.862028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.862058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.862160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.862187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.862464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.862495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.862717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.862748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.862967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.862996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.863133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.863160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 
00:28:27.624 [2024-10-14 16:53:31.863257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.863283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.863459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.863487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.863617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.863647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.863880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.863908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.864081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.864109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.864292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.864320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.864482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.864509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.864741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.864774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.864897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.864925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.865107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.865134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 
00:28:27.624 [2024-10-14 16:53:31.865321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.865350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.865531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.865559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.865726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.865754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.865882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.865908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.866107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.866135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.866389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.866418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.866519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.866545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.866776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.866804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.866918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.866945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.867129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.867151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 
00:28:27.624 [2024-10-14 16:53:31.867319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.867343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.867503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.867525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.867746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.867771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.867937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.867963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.868114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.868134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.868290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.868305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.868454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.624 [2024-10-14 16:53:31.868469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.624 qpair failed and we were unable to recover it. 00:28:27.624 [2024-10-14 16:53:31.868611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.868626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.868800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.868814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.868962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.868977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 
00:28:27.625 [2024-10-14 16:53:31.869128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.869142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.869241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.869254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.869428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.869442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.869591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.869619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.869722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.869743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.869920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.869942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.870034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.870054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.870156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.870178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.870338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.870360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.870459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.870476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 
00:28:27.625 [2024-10-14 16:53:31.870637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.870652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.870734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.870747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.870885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.870899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.871074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.871089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.871182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.871196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.871263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.871276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.871435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.871450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.871529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.871543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.871624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.871638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.871845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.871868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 
00:28:27.625 [2024-10-14 16:53:31.872096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.872118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.872223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.872245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.872414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.872435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.872584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.872606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.872787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.872803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.872885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.872898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.873033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.873049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.873141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.873154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.873307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.873322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.873475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.873489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 
00:28:27.625 [2024-10-14 16:53:31.873578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.873591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.873689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.873703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.873847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.873862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.874043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.874075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.874188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.874209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.874388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.874411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.874568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.625 [2024-10-14 16:53:31.874590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.625 qpair failed and we were unable to recover it. 00:28:27.625 [2024-10-14 16:53:31.874716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.874737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.874843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.874861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.874946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.874960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 
00:28:27.626 [2024-10-14 16:53:31.875110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.875125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.875274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.875289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.875374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.875387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.875592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.875632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.875781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.875796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.875897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.875911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.876070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.876085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.876236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.876257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.876429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.876451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.876618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.876644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 
00:28:27.626 [2024-10-14 16:53:31.876814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.876835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.876934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.876958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.877066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.877081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.877223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.877241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.877402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.877422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.877596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.877621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.877729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.877747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.877838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.877857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.878065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.878083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.878160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.878178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 
00:28:27.626 [2024-10-14 16:53:31.878347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.878374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.878576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.878611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.878708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.878734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.878970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.878999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.879182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.879209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.879385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.879412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.879582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.879636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.879824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.879851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.879970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.880000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.880260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.880288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 
00:28:27.626 [2024-10-14 16:53:31.880460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.880489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.880726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.880756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.880921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.880949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.881232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.881266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.881442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.881471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.881606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.881635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.881758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.881786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.626 [2024-10-14 16:53:31.882044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.626 [2024-10-14 16:53:31.882073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.626 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.882247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.882275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.882453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.882481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 
00:28:27.627 [2024-10-14 16:53:31.882614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.882643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.882774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.882803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.882992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.883019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.883205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.883232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.883350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.883379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.883489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.883516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.883685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.883715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.883839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.883868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.883967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.883994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.884185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.884213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 
00:28:27.627 [2024-10-14 16:53:31.884376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.884405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.884638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.884667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.884843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.884873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.884997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.885025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.885217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.885245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.885530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.885559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.885772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.885802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.885941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.885968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.886098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.886125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.886285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.886314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 
00:28:27.627 [2024-10-14 16:53:31.886498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.886526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.886649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.886678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.886841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.886869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.886984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.887012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.887217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.887244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.887410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.887436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.887533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.887560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.887801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.887825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.887994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.888012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.888218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.888234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 
00:28:27.627 [2024-10-14 16:53:31.888301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.888315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.627 [2024-10-14 16:53:31.888468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.627 [2024-10-14 16:53:31.888483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.627 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.888624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.888640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.888812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.888830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.889032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.889047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.889123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.889138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.889343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.889357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.889521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.889543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.889726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.889749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.889898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.889921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 
00:28:27.628 [2024-10-14 16:53:31.890075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.890096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.890319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.890337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.890433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.890447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.890594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.890617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.890688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.890702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.890853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.890867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.890956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.890970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.891110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.891124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.891213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.891228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.891376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.891391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 
00:28:27.628 [2024-10-14 16:53:31.891598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.891633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.891852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.891874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.891976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.891999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.892092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.892113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.892270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.892291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.892449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.892467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.892559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.892573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.892720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.892736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.892871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.892893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.892964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.892978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 
00:28:27.628 [2024-10-14 16:53:31.893131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.893146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.893299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.893313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.893529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.893544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.893625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.893640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.893706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.893719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.893953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.893976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.894156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.894178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.894361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.894383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.894550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.894573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.894728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.894746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 
00:28:27.628 [2024-10-14 16:53:31.894823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.894836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.894974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.628 [2024-10-14 16:53:31.894989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.628 qpair failed and we were unable to recover it. 00:28:27.628 [2024-10-14 16:53:31.895071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.895085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.895293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.895312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.895539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.895555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.895655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.895670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.895848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.895862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.895959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.895980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.896074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.896095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.896246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.896268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 
00:28:27.629 [2024-10-14 16:53:31.896357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.896376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.896616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.896639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.896858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.896878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.897041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.897057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.897154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.897169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.897353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.897368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.897515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.897530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.897627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.897642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.897781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.897796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.897867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.897882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 
00:28:27.629 [2024-10-14 16:53:31.898023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.898038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.898145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.898173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.898273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.898302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.898468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.898498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.898674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.898704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.898829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.898859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.898996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.899024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.899261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.899292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.899533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.899564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.899780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.899812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 
00:28:27.629 [2024-10-14 16:53:31.899988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.900037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.900225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.900249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.900418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.900439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.900545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.900566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.900790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.900813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.900921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.900942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.901158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.901179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.901284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.901305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.901490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.901512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.901675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.901697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 
00:28:27.629 [2024-10-14 16:53:31.901805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.901826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.902040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.902061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.629 [2024-10-14 16:53:31.902229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.629 [2024-10-14 16:53:31.902251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.629 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.902498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.902526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.902778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.902799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.902882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.902902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.903070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.903091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.903180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.903201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.903296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.903317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.903410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.903431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 
00:28:27.630 [2024-10-14 16:53:31.903646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.903668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.903902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.903924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.904026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.904048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.904262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.904283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.904383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.904404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.904516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.904538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.904798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.904820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.904933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.904955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.905052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.905072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.905173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.905194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 
00:28:27.630 [2024-10-14 16:53:31.905300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.905321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.905422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.905443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.905690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.905712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.905862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.905883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.905977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.905997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.906101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.906122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.906368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.906389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.906555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.906576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.906748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.906770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.906854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.906875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 
00:28:27.630 [2024-10-14 16:53:31.907058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.907079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.907185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.907206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.907374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.907394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.907492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.907513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.907664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.907687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.907833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.907855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.908011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.908032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.908215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.908237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.908396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.908417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.908506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.908527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 
00:28:27.630 [2024-10-14 16:53:31.908686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.908708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.908928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.908949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.909112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.630 [2024-10-14 16:53:31.909133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.630 qpair failed and we were unable to recover it. 00:28:27.630 [2024-10-14 16:53:31.909290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.909316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.909468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.909489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.909648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.909671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.909846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.909867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.910027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.910048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.910213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.910233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.910354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.910375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 
00:28:27.631 [2024-10-14 16:53:31.910459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.910480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.910654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.910676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.910843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.910864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.911022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.911044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.911157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.911179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.911352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.911373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.911477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.911498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.911681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.911703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.911872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.911894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.912124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.912145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 
00:28:27.631 [2024-10-14 16:53:31.912234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.912255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.912474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.912495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.912714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.912736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.912977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.912999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.913090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.913111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.913214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.913236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.913326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.913346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.913573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.913594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.913762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.913783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.914003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.914025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 
00:28:27.631 [2024-10-14 16:53:31.914187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.914208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.914356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.914377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.914458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.914478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.914643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.914665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.914830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.914851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.915098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.915119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.915277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.915299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.915464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.915485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.915590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.915618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.915726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.915748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 
00:28:27.631 [2024-10-14 16:53:31.915913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.915933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.916083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.916104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.916264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.916285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.631 qpair failed and we were unable to recover it. 00:28:27.631 [2024-10-14 16:53:31.916461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.631 [2024-10-14 16:53:31.916486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.916644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.916667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.916896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.916916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.917029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.917050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.917198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.917218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.917377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.917398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.917479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.917499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 
00:28:27.632 [2024-10-14 16:53:31.917716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.917739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.917838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.917858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.917959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.917980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.918159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.918180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.918274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.918295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.918451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.918472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.918647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.918670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.918789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.918811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.918967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.918988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.919072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.919093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 
00:28:27.632 [2024-10-14 16:53:31.919190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.919212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.919305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.919325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.919480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.919501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.919582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.919625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.919791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.919813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.920029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.920050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.920264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.920285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.920450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.920470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.920685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.920708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.920876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.920898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 
00:28:27.632 [2024-10-14 16:53:31.921013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.921045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.921145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.921166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.921253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.921273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.921367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.921387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.921598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.921625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.921764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.921777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.921911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.921924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.922153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.922166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.922295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.922309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 00:28:27.632 [2024-10-14 16:53:31.922443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.922456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.632 qpair failed and we were unable to recover it. 
00:28:27.632 [2024-10-14 16:53:31.922667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.632 [2024-10-14 16:53:31.922681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.922758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.922771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.922920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.922939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.923029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.923053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.923221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.923240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.923406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.923426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.923512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.923532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.923685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.923703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.923885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.923899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.924104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.924117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 
00:28:27.633 [2024-10-14 16:53:31.924205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.924217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.924294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.924306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.924441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.924453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.924520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.924532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.924608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.924621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.924705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.924717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.924917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.924931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.925071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.925090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.925243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.925262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.925436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.925456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 
00:28:27.633 [2024-10-14 16:53:31.925682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.925702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.925924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.925939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.926031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.926046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.926180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.926193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.926343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.926357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.926613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.926628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.926709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.926721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.926809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.926821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.926895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.926907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.927112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.927130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 
00:28:27.633 [2024-10-14 16:53:31.927364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.927384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.927552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.927572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.927675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.927695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.927896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.927913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.928085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.928099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.928180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.928192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.928267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.928279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.928424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.928438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.928700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.928718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.633 [2024-10-14 16:53:31.928869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.928886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 
00:28:27.633 [2024-10-14 16:53:31.928977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.633 [2024-10-14 16:53:31.928993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.633 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.929135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.929152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.929250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.929274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.929443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.929478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.929582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.929613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.929785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.929811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.929985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.930010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.930166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.930192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.930363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.930390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.930480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.930505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 
00:28:27.634 [2024-10-14 16:53:31.930695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.930722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.931006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.931032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.931143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.931169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.931343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.931369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.931475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.931499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.931631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.931658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.931760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.931784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.932046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.932071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.932163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.932188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.932416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.932442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 
00:28:27.634 [2024-10-14 16:53:31.932623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.932652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.932764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.932789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.932957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.932983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.933158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.933185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.933415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.933440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.933665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.933692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.933861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.933888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.934068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.934094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.934207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.934231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.934395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.934420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 
00:28:27.634 [2024-10-14 16:53:31.934517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.934546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.934772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.934799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.934960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.934986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.935102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.935126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.935229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.935254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.935411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.935437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.935551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.935575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.935746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.935795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.936031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.936055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.936208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.936230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 
00:28:27.634 [2024-10-14 16:53:31.936391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.634 [2024-10-14 16:53:31.936413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.634 qpair failed and we were unable to recover it. 00:28:27.634 [2024-10-14 16:53:31.936583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.936616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.936723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.936744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.936909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.936930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.937092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.937114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.937279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.937299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.937485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.937506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.937587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.937616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.937776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.937798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.937973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.937994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 
00:28:27.635 [2024-10-14 16:53:31.938189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.938211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.938477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.938498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.938595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.938624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.938732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.938754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.938864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.938886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.938978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.938999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.939169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.939190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.939426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.939447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.939604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.939634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.939761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.939782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 
00:28:27.635 [2024-10-14 16:53:31.939951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.939971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.940137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.940158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.940308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.940331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.940491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.940505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.940663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.940678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.940829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.940844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.940994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.941008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.941153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.941168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.941382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.941396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.941480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.941493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 
00:28:27.635 [2024-10-14 16:53:31.941688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.941712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.941801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.941821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.942066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.942088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.942265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.942285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.942433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.942452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.942598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.942618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.942715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.942728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.942879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.942893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.943047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.943060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.635 [2024-10-14 16:53:31.943297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.943311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 
00:28:27.635 [2024-10-14 16:53:31.943509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.635 [2024-10-14 16:53:31.943523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.635 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.943623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.943637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.943725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.943738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.943894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.943916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.944084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.944104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.944276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.944297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.944462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.944482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.944568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.944583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.944722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.944736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.944815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.944828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 
00:28:27.636 [2024-10-14 16:53:31.945047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.945060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.945236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.945249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.945328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.945342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.945541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.945556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.945772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.945786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.945935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.945953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.946100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.946121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.946277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.946298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.946477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.946497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.946657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.946678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 
00:28:27.636 [2024-10-14 16:53:31.946866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.946881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.947015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.947028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.947196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.947209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.947345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.947359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.947523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.947536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.947692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.947706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.947799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.947813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.947884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.947896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.948109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.948131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.948301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.948321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 
00:28:27.636 [2024-10-14 16:53:31.948409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.948433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.948581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.948608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.948712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.948731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.948845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.948871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.949119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.949137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.949357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.949375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.949541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.636 [2024-10-14 16:53:31.949559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.636 qpair failed and we were unable to recover it. 00:28:27.636 [2024-10-14 16:53:31.949717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.949736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.949952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.949970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.950083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.950100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 
00:28:27.637 [2024-10-14 16:53:31.950261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.950287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.950452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.950479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.950706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.950734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.950828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.950855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.951043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.951069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.951323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.951352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.951548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.951575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.951768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.951791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.951917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.951940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.952100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.952121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 
00:28:27.637 [2024-10-14 16:53:31.952236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.952257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.952355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.952377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.952529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.952551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.952648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.952670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.952830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.952852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.953067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.953088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.953242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.953263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.953381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.953403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.953579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.953605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.953714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.953736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 
00:28:27.637 [2024-10-14 16:53:31.953890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.953912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.954092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.954114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.954212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.954233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.954446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.954468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.954565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.954586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.954748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.954770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.954869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.954890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.955078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.955100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.955262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.955283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.955437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.955459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 
00:28:27.637 [2024-10-14 16:53:31.955625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.955652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.955850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.955871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.955975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.955996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.956167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.956188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.956288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.956310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.956432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.956453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.956619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.637 [2024-10-14 16:53:31.956641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.637 qpair failed and we were unable to recover it. 00:28:27.637 [2024-10-14 16:53:31.956742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.956763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.956859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.956881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.957097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.957117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 
00:28:27.638 [2024-10-14 16:53:31.957220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.957242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.957391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.957413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.957515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.957536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.957625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.957647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.957877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.957898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.958133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.958154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.958260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.958281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.958387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.958409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.958555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.958576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.958743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.958765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 
00:28:27.638 [2024-10-14 16:53:31.958912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.958933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.959100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.959121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.959294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.959315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.959488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.959509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.959595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.959625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.959815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.959837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.959991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.960012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.960161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.960183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.960286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.960306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.960485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.960514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 
00:28:27.638 [2024-10-14 16:53:31.960694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.960720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.960876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.960897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.961009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.961031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.961184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.961205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.961287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.961309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.961447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.961465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.961536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.961550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.961684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.961699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.961922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.961936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.962073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.962087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 
00:28:27.638 [2024-10-14 16:53:31.962231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.962250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.962328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.962342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.962411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.962425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.962561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.962575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.962658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.962671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.962770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.962788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.962890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.962911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.638 qpair failed and we were unable to recover it. 00:28:27.638 [2024-10-14 16:53:31.963075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.638 [2024-10-14 16:53:31.963096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.963190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.963212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.963314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.963335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 
00:28:27.639 [2024-10-14 16:53:31.963437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.963458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.963652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.963672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.963816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.963831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.963988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.964002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.964142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.964157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.964241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.964254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.964389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.964404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.964471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.964484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.964555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.964568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.964794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.964810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 
00:28:27.639 [2024-10-14 16:53:31.964963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.964978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.965143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.965164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.965314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.965334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.965430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.965451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.965544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.965565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.965654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.965676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.965785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.965806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.965969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.965985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.966132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.966146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.966300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.966314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 
00:28:27.639 [2024-10-14 16:53:31.966394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.966407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.966552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.966566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.966720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.966736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.966802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.966815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.967016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.967030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.967110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.967123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.967230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.967252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.967401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.967422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.967519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.967539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.967691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.967715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 
00:28:27.639 [2024-10-14 16:53:31.967818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.967843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.967960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.967980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.968089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.968105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.968242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.968256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.968331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.968344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.968477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.968492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.968587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.968606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.639 [2024-10-14 16:53:31.968691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.639 [2024-10-14 16:53:31.968704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.639 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.968836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.968850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.968979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.968994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 
00:28:27.640 [2024-10-14 16:53:31.969073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.969085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.969165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.969178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.969417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.969439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.969589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.969614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.969796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.969826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.969994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.970022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.970141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.970167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.970422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.970452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.970617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.970646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.970763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.970791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 
00:28:27.640 [2024-10-14 16:53:31.971018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.971047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.971239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.971267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.971428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.971458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.971583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.971852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.972143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.972174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.972348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.972377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.972653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.972686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.972930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.972958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.973194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.973224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.973339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.973367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 
00:28:27.640 [2024-10-14 16:53:31.973625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.973670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.973904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.973933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.974119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.974148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.974328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.974356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.974520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.974547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.974778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.974810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.974939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.974966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.975148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.975175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.975408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.975436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.975693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.975724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 
00:28:27.640 [2024-10-14 16:53:31.975917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.975952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.976064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.976091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.976267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.976297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.976467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.640 [2024-10-14 16:53:31.976495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.640 qpair failed and we were unable to recover it. 00:28:27.640 [2024-10-14 16:53:31.976615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.976643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.976819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.976848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.977027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.977055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.977240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.977268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.977530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.977559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.977670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.977698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 
00:28:27.641 [2024-10-14 16:53:31.977805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.977832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.977952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.977981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.978221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.978250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.978371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.978398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.978532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.978560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.978691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.978719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.978912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.978942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.979131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.979153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.979369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.979387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.979643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.979665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 
00:28:27.641 [2024-10-14 16:53:31.979768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.979787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.979943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.979956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.980050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.980062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.980132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.980144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.980223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.980234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.980365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.980378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.980577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.980597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.980767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.980787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.980956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.980974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.981063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.981081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 
00:28:27.641 [2024-10-14 16:53:31.981238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.981252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.981485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.981499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.981655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.981670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.981757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.981769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.981838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.981850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.981928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.981940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.982082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.982094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.982226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.982239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.982329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.982340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.982413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.982425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 
00:28:27.641 [2024-10-14 16:53:31.982569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.982592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.982711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.982729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.982816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.982833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.982998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.983017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.641 [2024-10-14 16:53:31.983196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.641 [2024-10-14 16:53:31.983213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.641 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.983283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.983294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.983435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.983448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.983592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.983609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.983673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.983685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.983816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.983828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 
00:28:27.642 [2024-10-14 16:53:31.983969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.983983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.984067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.984078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.984151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.984164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.984304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.984317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.984464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.984478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.984569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.984586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.984803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.984823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.984978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.984998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.985184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.985205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.985347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.985361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 
00:28:27.642 [2024-10-14 16:53:31.985511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.985525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.985614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.985627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.985773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.985786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.985857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.985868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.986016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.986028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.986121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.986132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.986228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.986241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.986468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.986480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.986553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.986565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.986666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.986684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 
00:28:27.642 [2024-10-14 16:53:31.986778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.986796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.986964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.986982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.987071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.987088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.987271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.987292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.987443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.987457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.987551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.987564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.987635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.987648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.987788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.987801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.987889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.987901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.988086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.988099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 
00:28:27.642 [2024-10-14 16:53:31.988181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.988197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.988331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.988344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.988404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.988416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.988508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.988520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.988651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.642 [2024-10-14 16:53:31.988664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.642 qpair failed and we were unable to recover it. 00:28:27.642 [2024-10-14 16:53:31.988749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.988766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.988922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.988941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.989034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.989051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.989135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.989151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.989300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.989320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 
00:28:27.643 [2024-10-14 16:53:31.989475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.989490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.989636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.989651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.989810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.989828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.989907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.989923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.990066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.990083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.990245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.990261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.990484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.990500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.990588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.990611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.990775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.990792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.990878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.990894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 
00:28:27.643 [2024-10-14 16:53:31.990993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.991009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.991177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.991193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.991333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.991350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.991437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.991453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.991591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.991615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.991756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.991772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.991949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.991966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.992120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.992136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.992278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.992295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.992445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.992462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 
00:28:27.643 [2024-10-14 16:53:31.992557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.992573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.992737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.992754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.992858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.992874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.993041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.993058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.993162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.993179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.993265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.993281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.993367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.993383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.993485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.993500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.993650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.993668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.993823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.993840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 
00:28:27.643 [2024-10-14 16:53:31.993989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.994009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.994087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.994103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.994321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.994338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.994478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.994495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.994646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.994663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.643 qpair failed and we were unable to recover it. 00:28:27.643 [2024-10-14 16:53:31.994839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.643 [2024-10-14 16:53:31.994869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.995063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.995094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.995231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.995275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.995434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.995451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.995591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.995652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 
00:28:27.644 [2024-10-14 16:53:31.995783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.995814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.996075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.996106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.996235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.996265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.996397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.996414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.996555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.996572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.996748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.996764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.996995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.997011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.997114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.997130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.997341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.997358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.997493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.997509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 
00:28:27.644 [2024-10-14 16:53:31.997696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.997714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.997931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.997961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.998074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.998104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.998234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.998264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.998450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.998481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.998674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.998692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.998886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.998902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.999057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.999073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.999145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.999161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.999342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.999371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 
00:28:27.644 [2024-10-14 16:53:31.999557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.999586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.999810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:31.999840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:31.999968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:32.000008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:32.000116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:32.000138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:32.000241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:32.000263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:32.000428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:32.000450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:32.000611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:32.000632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:32.000925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:32.000947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:32.001054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:32.001076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:32.001231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:32.001253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 
00:28:27.644 [2024-10-14 16:53:32.001486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:32.001513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.644 qpair failed and we were unable to recover it. 00:28:27.644 [2024-10-14 16:53:32.001666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.644 [2024-10-14 16:53:32.001690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.001808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.001830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.001998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.002021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.002128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.002149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.002250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.002272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.002494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.002525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.002638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.002669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.002771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.002801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.003034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.003065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 
00:28:27.645 [2024-10-14 16:53:32.003246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.003276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.003468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.003499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.003640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.003663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.003817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.003840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.004028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.004059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.004231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.004262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.004469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.004500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.004699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.004722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.004828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.004850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.005006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.005028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 
00:28:27.645 [2024-10-14 16:53:32.005268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.005290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.005392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.005414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.005573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.005596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.005707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.005729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.005896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.005920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.006089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.006112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.006213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.006236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.006401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.006424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.006589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.006620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.006791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.006822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 
00:28:27.645 [2024-10-14 16:53:32.007001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.007031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.007299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.007330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.007567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.007589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.007733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.007757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.008001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.008032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.008217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.008248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.008506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.008544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.008703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.008727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.008813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.008836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.009027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.009049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 
00:28:27.645 [2024-10-14 16:53:32.009288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.645 [2024-10-14 16:53:32.009315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.645 qpair failed and we were unable to recover it. 00:28:27.645 [2024-10-14 16:53:32.009487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.009510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.009610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.009633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.009800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.009823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.010046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.010077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.010262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.010292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.010477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.010508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.010648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.010678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.010895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.010922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.011134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.011162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 
00:28:27.646 [2024-10-14 16:53:32.011362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.011391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.011512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.011539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.011739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.011769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.011957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.011985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.012101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.012130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.012261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.012289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.012417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.012446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.012740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.012773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.012873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.012904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.013093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.013124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 
00:28:27.646 [2024-10-14 16:53:32.013312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.013342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.013581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.013621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.013743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.013773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.013905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.013936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.014132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.014163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.014349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.014380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.014568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.014598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.014848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.014879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.014998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.015029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.015165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.015194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 
00:28:27.646 [2024-10-14 16:53:32.015451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.015483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.015660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.015689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.015806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.015834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.015942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.015970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.016157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.016185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.016351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.016378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.016633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.016663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.016772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.016798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.016981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.017008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.017126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.017153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 
00:28:27.646 [2024-10-14 16:53:32.017385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.017418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.646 qpair failed and we were unable to recover it. 00:28:27.646 [2024-10-14 16:53:32.017651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.646 [2024-10-14 16:53:32.017681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.017916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.017943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.018133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.018161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.018408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.018435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.018614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.018643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.018830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.018857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.019031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.019058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.019241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.019269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.019382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.019409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 
00:28:27.647 [2024-10-14 16:53:32.019644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.019674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.019927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.019955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.020122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.020152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.020333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.020364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.020626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.020659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.020829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.020858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.021040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.021070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.021240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.021270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.021377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.021409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.021599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.021637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 
00:28:27.647 [2024-10-14 16:53:32.021812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.021842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.022088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.022118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.022247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.022277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.022538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.022569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.022703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.022734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.022993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.023024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.023145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.023176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.023419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.023489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.023639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.023677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.023881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.023915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 
00:28:27.647 [2024-10-14 16:53:32.024041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.024073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.024256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.024287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.024482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.024514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.024706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.024740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.024924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.024955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.025151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.025183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.647 [2024-10-14 16:53:32.025371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.647 [2024-10-14 16:53:32.025402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.647 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.025526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.025558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.025746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.025778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.025954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.025986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 
00:28:27.648 [2024-10-14 16:53:32.026248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.026278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.026507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.026539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.026670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.026702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.026905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.026936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.027057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.027089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.027215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.027246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.027486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.027517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.027696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.027730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.027912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.027943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.028178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.028209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 
00:28:27.648 [2024-10-14 16:53:32.028341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.028373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.028545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.028576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.028783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.028815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.029020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.029052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.029240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.029271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.029450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.029482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.029660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.029693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.029877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.029909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.030152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.030183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.030307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.030339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 
00:28:27.648 [2024-10-14 16:53:32.030581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.030620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.030813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.030844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.031018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.031049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.031303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.031334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.031451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.031482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.031662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.031695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.031871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.031903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.032090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.032128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.032252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.032283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.032528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.032559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 
00:28:27.648 [2024-10-14 16:53:32.032738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.032772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.032889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.032921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.033095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.033127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.033352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.033382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.033560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.033592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.033718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.033748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.033938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-10-14 16:53:32.033969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.648 qpair failed and we were unable to recover it. 00:28:27.648 [2024-10-14 16:53:32.034079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.034110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.034375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.034407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.034527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.034557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 
00:28:27.649 [2024-10-14 16:53:32.034742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.034775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.034966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.034997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.035180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.035211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.035397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.035427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.035598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.035649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.035871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.035901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.036093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.036124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.036234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.036265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.036382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.036413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.036582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.036625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 
00:28:27.649 [2024-10-14 16:53:32.036801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.036833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.037038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.037069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.037266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.037297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.037466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.037496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.037615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.037648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.037825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.037856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.038091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.038122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.038307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.038338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.038457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.038487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.038612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.038645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 
00:28:27.649 [2024-10-14 16:53:32.038848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.038878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.039116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.039147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.039340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.039371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.039581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.039631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.039804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.039835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.040111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.040143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.040355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.040386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.040641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.040681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.040858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-10-14 16:53:32.040888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.649 qpair failed and we were unable to recover it. 00:28:27.649 [2024-10-14 16:53:32.041004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.041034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 
00:28:27.650 [2024-10-14 16:53:32.041208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.041239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.041412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.041443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.041620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.041651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.041894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.041927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.042170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.042202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.042342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.042373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.042499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.042531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.042772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.042805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.043012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.043043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.043169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.043200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 
00:28:27.650 [2024-10-14 16:53:32.043325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.043356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.043475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.043507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.043768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.043801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.043935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.043968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.044158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.044189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.044413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.044444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.044628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.044661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.044940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.044971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.045305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.045337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.045447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.045479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 
00:28:27.650 [2024-10-14 16:53:32.045741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.045774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.046020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.046051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.046313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.046345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.046593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.046633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.046773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.046805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.047066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.047097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.047275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.047306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.047432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.047462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.047653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.047685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.047946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.047978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 
00:28:27.650 [2024-10-14 16:53:32.048215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.048246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.048378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.048410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.048541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.048572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.048701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.048733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.048940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.048972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.049156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.049187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.049425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.049457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.049632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.049670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.650 [2024-10-14 16:53:32.049868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-10-14 16:53:32.049899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.650 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.050078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.050109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 
00:28:27.651 [2024-10-14 16:53:32.050380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.050412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.050670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.050703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.050899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.050930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.051118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.051150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.051281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.051311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.051433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.051464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.051645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.051679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.051889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.051920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.052100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.052132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.052311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.052342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 
00:28:27.651 [2024-10-14 16:53:32.052529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.052560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.052747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.052779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.053044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.053076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.053314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.053344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.053523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.053554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.053830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.053862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.053990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.054022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.054161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.054191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.054383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.054415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.054521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.054551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 
00:28:27.651 [2024-10-14 16:53:32.054760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.054793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.055033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.055065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.055185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.055216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.055330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.055361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.055615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.055649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.055836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.055868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.056105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.056136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.056241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.056272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.056392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.056423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.056661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.056694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 
00:28:27.651 [2024-10-14 16:53:32.056865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.056895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.057132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.057163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.057379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.057410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.057619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.057652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.651 qpair failed and we were unable to recover it. 00:28:27.651 [2024-10-14 16:53:32.057837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.651 [2024-10-14 16:53:32.057867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.058109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.058141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.058333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.058365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.058567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.058610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.058745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.058777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.058908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.058937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 
00:28:27.652 [2024-10-14 16:53:32.059131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.059162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.059347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.059377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.059633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.059667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.059913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.059944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.060184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.060215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.060407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.060438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.060627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.060660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.060903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.060934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.061119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.061150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.061391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.061421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 
00:28:27.652 [2024-10-14 16:53:32.061591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.061631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.061754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.061784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.061957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.061988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.062176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.062207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.062379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.062411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.062620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.062651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.062836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.062868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.063055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.063086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.063257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.063287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.063476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.063507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 
00:28:27.652 [2024-10-14 16:53:32.063752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.063785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.063914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.063944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.064137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.064169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.064417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.064448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.064633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.064667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.064848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.064880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.065001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.065032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.065239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.065270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.065452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.065483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.065618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.065650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 
00:28:27.652 [2024-10-14 16:53:32.065835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.065867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.652 qpair failed and we were unable to recover it. 00:28:27.652 [2024-10-14 16:53:32.066041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.652 [2024-10-14 16:53:32.066071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.066252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.066284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.066496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.066528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.066796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.066829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.066964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.066996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.067188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.067220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.067394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.067436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.067572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.067621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.067800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.067830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 
00:28:27.653 [2024-10-14 16:53:32.068034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.068066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.068189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.068219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.068402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.068434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.068541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.068572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.068805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.068838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.068966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.068996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.069182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.069214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.069337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.069368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.069617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.069649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.069916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.069948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 
00:28:27.653 [2024-10-14 16:53:32.070078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.070109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.070284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.070316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.070506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.070537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.070788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.070821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.070937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.070968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.071092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.071123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.071248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.071278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.071457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.071488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.071669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.071702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 00:28:27.653 [2024-10-14 16:53:32.071802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.653 [2024-10-14 16:53:32.071832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.653 qpair failed and we were unable to recover it. 
00:28:27.653 [2024-10-14 16:53:32.071984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.653 [2024-10-14 16:53:32.072016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420
00:28:27.653 qpair failed and we were unable to recover it.
00:28:27.653 [2024-10-14 16:53:32.072219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.653 [2024-10-14 16:53:32.072268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:27.653 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 16:53:32.072376 through 16:53:32.106651 for tqpairs 0x7f7120000b90, 0x7f7124000b90 and 0x7f712c000b90, all against addr=10.0.0.2, port=4420 ...]
00:28:27.659 [2024-10-14 16:53:32.106917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.659 [2024-10-14 16:53:32.106948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.659 qpair failed and we were unable to recover it. 00:28:27.659 [2024-10-14 16:53:32.107120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.659 [2024-10-14 16:53:32.107150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.659 qpair failed and we were unable to recover it. 00:28:27.659 [2024-10-14 16:53:32.107408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.659 [2024-10-14 16:53:32.107439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.659 qpair failed and we were unable to recover it. 00:28:27.659 [2024-10-14 16:53:32.107680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.659 [2024-10-14 16:53:32.107702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.659 qpair failed and we were unable to recover it. 00:28:27.659 [2024-10-14 16:53:32.107853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.659 [2024-10-14 16:53:32.107873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.659 qpair failed and we were unable to recover it. 00:28:27.659 [2024-10-14 16:53:32.107953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.659 [2024-10-14 16:53:32.107974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.659 qpair failed and we were unable to recover it. 00:28:27.659 [2024-10-14 16:53:32.108132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.659 [2024-10-14 16:53:32.108152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.659 qpair failed and we were unable to recover it. 00:28:27.659 [2024-10-14 16:53:32.108312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.108337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.108501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.108522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.108623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.108645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 
00:28:27.660 [2024-10-14 16:53:32.108736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.108757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.108852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.108873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.108971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.108992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.109110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.109132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.109350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.109371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.109478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.109499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.109617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.109639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.109725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.109746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.109829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.109850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.110012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.110033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 
00:28:27.660 [2024-10-14 16:53:32.110209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.110229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.110327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.110347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.110442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.110464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.110549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.110570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.110654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.110676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.110857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.110878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.111094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.111114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.111195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.111216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.111311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.111332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.111426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.111448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 
00:28:27.660 [2024-10-14 16:53:32.111681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.111702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.111876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.111906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.112088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.112118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.112232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.112263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.112453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.112484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.112666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.112688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.112799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.112819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.112968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.112989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.113138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.113159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.113322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.113343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 
00:28:27.660 [2024-10-14 16:53:32.113492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.113513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.113611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.113632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.113783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.113804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.660 [2024-10-14 16:53:32.113883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.660 [2024-10-14 16:53:32.113904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.660 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.114055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.114075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.114165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.114186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.114339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.114360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.114535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.114571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.114715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.114748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.114959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.114990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 
00:28:27.661 [2024-10-14 16:53:32.115098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.115127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.115245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.115276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.115487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.115517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.115762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.115803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.115886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.115907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.116122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.116143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.116294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.116315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.116547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.116568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.116736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.116758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.116863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.116884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 
00:28:27.661 [2024-10-14 16:53:32.117146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.117166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.117386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.117408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.117556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.117577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.117750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.117772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.117917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.117937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.118093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.118115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.118275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.118305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.118492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.118522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.118644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.118676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.118984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.119015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 
00:28:27.661 [2024-10-14 16:53:32.119131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.119162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.119287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.119319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.119587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.119644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.119822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.119844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.120015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.120053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.120181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.120212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.120389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.120421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.120593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.120636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.120769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.120790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.121011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.121031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 
00:28:27.661 [2024-10-14 16:53:32.121129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.661 [2024-10-14 16:53:32.121150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.661 qpair failed and we were unable to recover it. 00:28:27.661 [2024-10-14 16:53:32.123834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.123870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.124129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.124150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.124365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.124386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.124608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.124631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.124799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.124821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.124996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.125027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.125126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.125158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.125427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.125459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.125651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.125683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 
00:28:27.662 [2024-10-14 16:53:32.125890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.125921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.126102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.126133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.126395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.126426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.126543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.126563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.126680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.126702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.126790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.126811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.126953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.126974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.127065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.127086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.127308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.127339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.127529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.127561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 
00:28:27.662 [2024-10-14 16:53:32.127855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.127886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.128064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.128096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.128229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.128260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.128392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.128422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.128619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.128652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.128942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.128974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.129102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.129132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.129346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.129377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.129557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.129577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.129675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.129697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 
00:28:27.662 [2024-10-14 16:53:32.129890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.129911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.130017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.130038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.130285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.130328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.130497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.130528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.130668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.130706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.130882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.130913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.131139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.131161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.131253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.131274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.131381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.131401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.131550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.131572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 
00:28:27.662 [2024-10-14 16:53:32.131735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.662 [2024-10-14 16:53:32.131757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.662 qpair failed and we were unable to recover it. 00:28:27.662 [2024-10-14 16:53:32.131954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.663 [2024-10-14 16:53:32.131975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.663 qpair failed and we were unable to recover it. 00:28:27.663 [2024-10-14 16:53:32.132151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.663 [2024-10-14 16:53:32.132172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.663 qpair failed and we were unable to recover it. 00:28:27.663 [2024-10-14 16:53:32.132339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.663 [2024-10-14 16:53:32.132360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.663 qpair failed and we were unable to recover it. 00:28:27.663 [2024-10-14 16:53:32.132468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.663 [2024-10-14 16:53:32.132489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.663 qpair failed and we were unable to recover it. 00:28:27.663 [2024-10-14 16:53:32.132709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.663 [2024-10-14 16:53:32.132731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.663 qpair failed and we were unable to recover it. 00:28:27.663 [2024-10-14 16:53:32.132918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.663 [2024-10-14 16:53:32.132938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.663 qpair failed and we were unable to recover it. 00:28:27.663 [2024-10-14 16:53:32.133086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.663 [2024-10-14 16:53:32.133128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.663 qpair failed and we were unable to recover it. 00:28:27.663 [2024-10-14 16:53:32.133318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.663 [2024-10-14 16:53:32.133350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.663 qpair failed and we were unable to recover it. 00:28:27.663 [2024-10-14 16:53:32.133591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.663 [2024-10-14 16:53:32.133629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.663 qpair failed and we were unable to recover it. 
00:28:27.663 [2024-10-14 16:53:32.133803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.663 [2024-10-14 16:53:32.133824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.663 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 16:53:32.133803 through 16:53:32.173413, mostly against tqpair=0x7f7120000b90 with a short run against tqpair=0x7f712c000b90, always for addr=10.0.0.2, port=4420 ...]
00:28:27.668 [2024-10-14 16:53:32.173542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.668 [2024-10-14 16:53:32.173572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.668 qpair failed and we were unable to recover it. 00:28:27.668 [2024-10-14 16:53:32.173829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.668 [2024-10-14 16:53:32.173862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.668 qpair failed and we were unable to recover it. 00:28:27.668 [2024-10-14 16:53:32.173989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.668 [2024-10-14 16:53:32.174019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.668 qpair failed and we were unable to recover it. 00:28:27.668 [2024-10-14 16:53:32.174203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.668 [2024-10-14 16:53:32.174233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.668 qpair failed and we were unable to recover it. 00:28:27.668 [2024-10-14 16:53:32.174440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.668 [2024-10-14 16:53:32.174472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.668 qpair failed and we were unable to recover it. 00:28:27.668 [2024-10-14 16:53:32.174666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.668 [2024-10-14 16:53:32.174688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.668 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.174924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.174945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.175032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.175051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.175132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.175151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.175313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.175334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 
00:28:27.669 [2024-10-14 16:53:32.175487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.175508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.175613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.175635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.175873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.175894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.175992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.176012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.176285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.176309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.176420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.176441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.176532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.176556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.176707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.176729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.176898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.176919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.177093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.177115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 
00:28:27.669 [2024-10-14 16:53:32.177291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.177322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.177586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.177625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.177817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.177847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.178023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.178054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.178182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.178213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.178471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.178502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.178621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.178653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.178768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.178789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.179014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.179035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.179277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.179298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 
00:28:27.669 [2024-10-14 16:53:32.179455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.179476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.179568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.179592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.179705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.179727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.179988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.180019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.180256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.180287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.180411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.180442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.180582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.180622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.180832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.180853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.180954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.180976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.181124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.181145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 
00:28:27.669 [2024-10-14 16:53:32.181362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.181394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.181527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.181559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.181671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.181702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.181958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.181979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.182191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.182212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.182448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.182477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.182588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.182631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.182886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.182918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.183224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.183245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.183331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.183350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 
00:28:27.669 [2024-10-14 16:53:32.183468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.183489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.183731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.183753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.183846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.183866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.184109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.184130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.184290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.184314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.184505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.184526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.184765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.184796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.184971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.185002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.185287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.185318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.185533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.185564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 
00:28:27.669 [2024-10-14 16:53:32.185783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.185815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.186069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.186090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.186243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.186264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.186466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.186497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.669 [2024-10-14 16:53:32.186685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.669 [2024-10-14 16:53:32.186717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.669 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.186955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.186985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.187247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.187268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.187424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.187445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.187559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.187581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.187686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.187707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 
00:28:27.670 [2024-10-14 16:53:32.187806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.187827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.187987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.188009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.188093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.188112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.188263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.188284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.188502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.188523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.188622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.188643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.188822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.188843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.189001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.189023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.189131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.189153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.189417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.189448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 
00:28:27.670 [2024-10-14 16:53:32.189565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.189596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.189730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.189762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.190000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.190030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.190273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.190303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.190413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.190445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.190619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.190651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.190783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.190814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.191005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.191036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.191207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.191227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.191323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.191344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 
00:28:27.670 [2024-10-14 16:53:32.191563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.191584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.191690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.191711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.191936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.191976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.192185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.192216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.192329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.192365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.192547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.192578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.192766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.192797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.192978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.192999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.193112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.193132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.193371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.193392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 
00:28:27.670 [2024-10-14 16:53:32.193569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.193590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.193773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.193794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.193894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.193915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.193998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.194018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.194117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.194138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.194289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.194309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.194493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.194514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.194662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.194685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.194798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.194819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.194965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.194986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 
00:28:27.670 [2024-10-14 16:53:32.195085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.195105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.195211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.195232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.195488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.195509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.195610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.195632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.195718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.195738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.195877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.195946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.196145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.196180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.196358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.196390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.196635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.196669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.196862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.196894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 
00:28:27.670 [2024-10-14 16:53:32.197077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.197108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.670 [2024-10-14 16:53:32.197297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.670 [2024-10-14 16:53:32.197319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.670 qpair failed and we were unable to recover it. 00:28:27.671 [2024-10-14 16:53:32.197547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.671 [2024-10-14 16:53:32.197568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.671 qpair failed and we were unable to recover it. 00:28:27.671 [2024-10-14 16:53:32.197663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.671 [2024-10-14 16:53:32.197683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.671 qpair failed and we were unable to recover it. 00:28:27.671 [2024-10-14 16:53:32.197759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.671 [2024-10-14 16:53:32.197783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.671 qpair failed and we were unable to recover it. 00:28:27.671 [2024-10-14 16:53:32.197979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.671 [2024-10-14 16:53:32.197999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.671 qpair failed and we were unable to recover it. 00:28:27.671 [2024-10-14 16:53:32.198118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.671 [2024-10-14 16:53:32.198139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.671 qpair failed and we were unable to recover it. 00:28:27.671 [2024-10-14 16:53:32.198251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.671 [2024-10-14 16:53:32.198272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.671 qpair failed and we were unable to recover it. 00:28:27.671 [2024-10-14 16:53:32.198488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.671 [2024-10-14 16:53:32.198508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.671 qpair failed and we were unable to recover it. 00:28:27.671 [2024-10-14 16:53:32.198656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.671 [2024-10-14 16:53:32.198678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.671 qpair failed and we were unable to recover it. 
00:28:27.671 [2024-10-14 16:53:32.198946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.671 [2024-10-14 16:53:32.198968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:27.671 qpair failed and we were unable to recover it.
00:28:27.959 [2024-10-14 16:53:32.236022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.959 [2024-10-14 16:53:32.236042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:27.959 qpair failed and we were unable to recover it.
00:28:27.959 [2024-10-14 16:53:32.236136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.959 [2024-10-14 16:53:32.236157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.959 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.236265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.236286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.236381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.236403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.236492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.236512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.236676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.236699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.236860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.236881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.237031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.237052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.237199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.237220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.237383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.237404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.237493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.237515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 
00:28:27.960 [2024-10-14 16:53:32.237663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.237685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.237923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.237944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.238120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.238141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.238363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.238383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.238536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.238557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.238725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.238746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.238894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.238915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.239075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.239095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.239254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.239275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.239454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.239474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 
00:28:27.960 [2024-10-14 16:53:32.239642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.239663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.239780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.239805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.239904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.239925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.240084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.240104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.240219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.240240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.240321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.240341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.240505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.240526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.240677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.240700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.240861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.240882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.240969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.240990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 
00:28:27.960 [2024-10-14 16:53:32.241151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.241171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.241277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.241298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.241387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.241407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.241506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.241528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.241616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.241638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.241723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.241744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.241911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.241931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.242081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.242102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.242186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.242205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 00:28:27.960 [2024-10-14 16:53:32.242426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.960 [2024-10-14 16:53:32.242447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.960 qpair failed and we were unable to recover it. 
00:28:27.960 [2024-10-14 16:53:32.242654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.242676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.242841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.242863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.243076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.243097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.243252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.243272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.243438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.243459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.243563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.243583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.243844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.243867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.243963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.243983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.244079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.244100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.244186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.244206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 
00:28:27.961 [2024-10-14 16:53:32.244389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.244411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.244506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.244527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.244639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.244661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.244840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.244860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.244945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.244966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.245132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.245152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.245327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.245348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.245439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.245460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.245628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.245649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.245890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.245911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 
00:28:27.961 [2024-10-14 16:53:32.246077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.246098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.246337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.246360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.246528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.246549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.246709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.246731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.246894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.246914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.247157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.247178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.247284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.247304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.247464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.247485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.247591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.247617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.247778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.247799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 
00:28:27.961 [2024-10-14 16:53:32.248049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.248069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.248235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.248256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.248473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.248495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.248686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.248708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.248809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.248829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.249072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.249093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.249243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.249264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.249369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.249390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.249658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.249680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.961 qpair failed and we were unable to recover it. 00:28:27.961 [2024-10-14 16:53:32.249795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.961 [2024-10-14 16:53:32.249816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 
00:28:27.962 [2024-10-14 16:53:32.249929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.249950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.250060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.250082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.250296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.250317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.250420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.250441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.250527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.250547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.250644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.250667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.250859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.250880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.251062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.251084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.251186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.251206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.251452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.251473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 
00:28:27.962 [2024-10-14 16:53:32.251567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.251588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.251757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.251779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.251999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.252020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.252176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.252197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.252442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.252463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.252615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.252637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.252813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.252834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.252996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.253018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.253184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.253205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.253419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.253441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 
00:28:27.962 [2024-10-14 16:53:32.253592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.253618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.253770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.253796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.254015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.254036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.254217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.254238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.254458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.254479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.254716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.254738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.254967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.254987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.255137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.255158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.255400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.255421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.255610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.255632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 
00:28:27.962 [2024-10-14 16:53:32.255746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.255766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.255940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.255961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.256122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.256142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.256250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.256270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.256497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.256518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.256674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.256696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.256789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.256810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.256920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.256941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.962 qpair failed and we were unable to recover it. 00:28:27.962 [2024-10-14 16:53:32.257021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.962 [2024-10-14 16:53:32.257042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.257212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.257233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 
00:28:27.963 [2024-10-14 16:53:32.257312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.257331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.257442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.257462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.257555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.257575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.257676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.257697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.257883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.257904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.258005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.258026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.258100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.258120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.258228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.258249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.258403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.258425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.258527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.258548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 
00:28:27.963 [2024-10-14 16:53:32.258700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.258722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.258830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.258850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.259009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.259030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.259130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.259150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.259422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.259444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.259593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.259640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.259821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.259842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.260012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.260033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.260183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.260203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.260297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.260318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 
00:28:27.963 [2024-10-14 16:53:32.260477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.260498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.260675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.260701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.260924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.260945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.261165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.261185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.261334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.261355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.261442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.261463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.261581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.261614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.261755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.261826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.262087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.262121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.262352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.262375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 
00:28:27.963 [2024-10-14 16:53:32.262536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.262556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.262719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.262740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.262986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.263007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.263120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.263140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.263305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.263326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.263497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.263519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.263612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.263633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.263802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.963 [2024-10-14 16:53:32.263823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.963 qpair failed and we were unable to recover it. 00:28:27.963 [2024-10-14 16:53:32.263982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.264002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.264166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.264187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 
00:28:27.964 [2024-10-14 16:53:32.264289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.264309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.264524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.264545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.264717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.264739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.264890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.264910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.265141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.265162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.265274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.265295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.265444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.265465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.265705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.265726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.265880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.265901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.266073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.266094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 
00:28:27.964 [2024-10-14 16:53:32.266264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.266284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.266503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.266524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.266684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.266706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.266941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.266962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.267132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.267154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.267335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.267355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.267586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.267627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.267845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.267866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.268026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.268047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.268201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.268222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 
00:28:27.964 [2024-10-14 16:53:32.268490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.268511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.268608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.268635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.268721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.268741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.268836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.268857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.269036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.269058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.269207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.269228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.269396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.269417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.269533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.269555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.269637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.269658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.269742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.269762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 
00:28:27.964 [2024-10-14 16:53:32.269913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.269934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.270097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.964 [2024-10-14 16:53:32.270118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.964 qpair failed and we were unable to recover it. 00:28:27.964 [2024-10-14 16:53:32.270262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.270283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.270374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.270393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.270481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.270501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.270605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.270628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.270746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.270767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.270981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.271002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.271170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.271191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.271344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.271365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 
00:28:27.965 [2024-10-14 16:53:32.271546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.271567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.271753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.271776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.271955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.271977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.272174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.272195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.272278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.272298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.272404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.272425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.272705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.272727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.272964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.272985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.273106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.273140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.273310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.273327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 
00:28:27.965 [2024-10-14 16:53:32.273466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.273481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.273564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.273579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.273658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.273674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.273882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.273902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.274117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.274136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.274226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.274256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.274386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.274416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.274522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.274545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.274729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.274751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.274934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.274954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 
00:28:27.965 [2024-10-14 16:53:32.275126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.275147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.275294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.275318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.275480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.275502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.275583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.275619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.275803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.275824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.275984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.276004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.276184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.276206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.276314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.276335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.276431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.276452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.276549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.276570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 
00:28:27.965 [2024-10-14 16:53:32.276729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.276751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.965 [2024-10-14 16:53:32.276908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.965 [2024-10-14 16:53:32.276929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.965 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.277090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.277112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.277284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.277304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.277405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.277426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.277577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.277598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.277768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.277789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.278018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.278039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.278132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.278153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.278269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.278291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 
00:28:27.966 [2024-10-14 16:53:32.278474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.278495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.278589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.278616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.278809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.278830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.278992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.279013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.279204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.279225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.279405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.279425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.279523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.279544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.279713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.279735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.279840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.279861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.280015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.280036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 
00:28:27.966 [2024-10-14 16:53:32.280225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.280246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.280405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.280426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.280578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.280599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.280721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.280742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.280891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.280912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.281149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.281170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.281324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.281345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.281504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.281525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.281672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.281694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.281844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.281865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 
00:28:27.966 [2024-10-14 16:53:32.282027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.282048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.282150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.282175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.282340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.282361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.282518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.282539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.282643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.282665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.282839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.282860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.282955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.282975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.283119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.283140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.283381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.283402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.283596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.283625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 
00:28:27.966 [2024-10-14 16:53:32.283873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.966 [2024-10-14 16:53:32.283894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.966 qpair failed and we were unable to recover it. 00:28:27.966 [2024-10-14 16:53:32.283992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.284012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.284109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.284129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.284375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.284395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.284499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.284520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.284620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.284641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.284795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.284816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.284993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.285021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.285110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.285127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.285330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.285347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 
00:28:27.967 [2024-10-14 16:53:32.285417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.285432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.285583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.285599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.285715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.285731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.285886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.285911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.286005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.286028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.286194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.286218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.286462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.286486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.286608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.286633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.286728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.286744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.286973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.286989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 
00:28:27.967 [2024-10-14 16:53:32.287138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.287153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.287244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.287259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.287429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.287445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.287517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.287532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.287668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.287685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.287840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.287856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.287937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.287952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.288160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.288184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.288365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.288388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.288545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.288569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 
00:28:27.967 [2024-10-14 16:53:32.288774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.288800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.288967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.288995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.289105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.289126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.289277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.289298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.289528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.289548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.289630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.289663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.289879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.289899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.290012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.290033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.290250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.290271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.967 [2024-10-14 16:53:32.290418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.290440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 
00:28:27.967 [2024-10-14 16:53:32.290608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.967 [2024-10-14 16:53:32.290630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.967 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.290745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.290765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.290862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.290883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.290967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.290987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.291265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.291286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.291384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.291405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.291505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.291526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.291682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.291704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.291954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.291975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.292125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.292150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 
00:28:27.968 [2024-10-14 16:53:32.292261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.292283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.292494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.292517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.292691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.292715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.292867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.292891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.293058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.293082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.293203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.293221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.293310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.293329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.293469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.293484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.293673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.293696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.293779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.293800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 
00:28:27.968 [2024-10-14 16:53:32.294021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.294042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.294156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.294177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.294269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.294289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.294387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.294407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.294564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.294585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.294672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.294692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.294867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.294888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.294989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.295010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.295173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.295194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.295343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.295364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 
00:28:27.968 [2024-10-14 16:53:32.295520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.295541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.295701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.295731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.295828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.295849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.295998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.296019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.296131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.296152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.968 qpair failed and we were unable to recover it. 00:28:27.968 [2024-10-14 16:53:32.296257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.968 [2024-10-14 16:53:32.296278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.296433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.296454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.296567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.296587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.296747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.296769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.296924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.296944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 
00:28:27.969 [2024-10-14 16:53:32.297112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.297133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.297243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.297264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.297432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.297453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.297619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.297641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.297805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.297825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.298005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.298027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.298121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.298142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.298247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.298268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.298482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.298503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.298696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.298718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 
00:28:27.969 [2024-10-14 16:53:32.298946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.298966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.299119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.299139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.299239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.299258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.299353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.299373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.299536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.299555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.299652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.299673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.299838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.299858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.300017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.300037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.300195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.300215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.300399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.300419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 
00:28:27.969 [2024-10-14 16:53:32.300509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.300527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.300676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.300696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.300860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.300879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.301101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.301121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.301295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.301314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.301553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.301573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.301771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.301792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.301900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.301922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.302077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.302097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.302264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.302283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 
00:28:27.969 [2024-10-14 16:53:32.302364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.302384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.302542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.302562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.302677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.302698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.302863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.302883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.303028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.969 [2024-10-14 16:53:32.303048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.969 qpair failed and we were unable to recover it. 00:28:27.969 [2024-10-14 16:53:32.303158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.303177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.303339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.303359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.303517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.303537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.303705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.303725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.303933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.303953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 
00:28:27.970 [2024-10-14 16:53:32.304117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.304136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.304382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.304402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.304558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.304578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.304842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.304864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.305039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.305060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.305225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.305246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.305441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.305462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.305617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.305639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.305877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.305898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.306016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.306037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 
00:28:27.970 [2024-10-14 16:53:32.306199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.306219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.306433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.306453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.306622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.306644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.306790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.306811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.306916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.306937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.307086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.307106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.307202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.307223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.307406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.307426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.307513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.307537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.307706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.307729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 
00:28:27.970 [2024-10-14 16:53:32.307886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.307907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.308012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.308033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.308181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.308201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.308359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.308379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.308484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.308505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.308673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.308696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.308849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.308870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.309032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.309053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.309213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.309235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.309327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.309347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 
00:28:27.970 [2024-10-14 16:53:32.309551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.309640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.309979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.310049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.310260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.310294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.310557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.970 [2024-10-14 16:53:32.310579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.970 qpair failed and we were unable to recover it. 00:28:27.970 [2024-10-14 16:53:32.310853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.310876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.310977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.310997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.311168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.311188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.311346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.311367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.311471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.311492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.311779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.311801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 
00:28:27.971 [2024-10-14 16:53:32.311883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.311906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.312010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.312032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.312192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.312212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.312335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.312356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.312610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.312631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.312757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.312778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.312873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.312893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.312993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.313013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.313107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.313127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.313310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.313331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 
00:28:27.971 [2024-10-14 16:53:32.313444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.313465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.313623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.313644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.313732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.313752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.313925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.313946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.314045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.314066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.314216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.314237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.314357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.314377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.314558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.314578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.314803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.314830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.314922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.314943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 
00:28:27.971 [2024-10-14 16:53:32.315044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.315064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.315221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.315242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.315392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.315413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.315490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.315511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.315746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.315767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.315984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.316006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.316160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.316181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.316333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.316354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.316447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.316468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.316547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.316568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 
00:28:27.971 [2024-10-14 16:53:32.316786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.316808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.316905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.316926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.317078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.971 [2024-10-14 16:53:32.317099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.971 qpair failed and we were unable to recover it. 00:28:27.971 [2024-10-14 16:53:32.317320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.972 [2024-10-14 16:53:32.317341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.972 qpair failed and we were unable to recover it. 00:28:27.972 [2024-10-14 16:53:32.317429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.972 [2024-10-14 16:53:32.317450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.972 qpair failed and we were unable to recover it. 00:28:27.972 [2024-10-14 16:53:32.317613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.972 [2024-10-14 16:53:32.317635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.972 qpair failed and we were unable to recover it. 00:28:27.972 [2024-10-14 16:53:32.317729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.972 [2024-10-14 16:53:32.317752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.972 qpair failed and we were unable to recover it. 00:28:27.972 [2024-10-14 16:53:32.317914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.972 [2024-10-14 16:53:32.317935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.972 qpair failed and we were unable to recover it. 00:28:27.972 [2024-10-14 16:53:32.318016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.972 [2024-10-14 16:53:32.318036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.972 qpair failed and we were unable to recover it. 00:28:27.972 [2024-10-14 16:53:32.318267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.972 [2024-10-14 16:53:32.318288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.972 qpair failed and we were unable to recover it. 
00:28:27.972 [2024-10-14 16:53:32.318506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.972 [2024-10-14 16:53:32.318527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:27.972 qpair failed and we were unable to recover it.
00:28:27.972 [2024-10-14 16:53:32.318698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.972 [2024-10-14 16:53:32.318720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:27.972 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." entries continue for tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420, timestamps 16:53:32.318890 through 16:53:32.345915 ...]
00:28:27.976 [2024-10-14 16:53:32.346101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.976 [2024-10-14 16:53:32.346122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:27.976 qpair failed and we were unable to recover it.
[... identical entries continue for tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420, timestamps 16:53:32.346226 through 16:53:32.346708 ...]
00:28:27.976 [2024-10-14 16:53:32.346881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.976 [2024-10-14 16:53:32.346902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:27.976 qpair failed and we were unable to recover it.
00:28:27.976 [2024-10-14 16:53:32.347099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.976 [2024-10-14 16:53:32.347133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420
00:28:27.976 qpair failed and we were unable to recover it.
[... identical entries continue for tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420, timestamps 16:53:32.347254 through 16:53:32.352604 ...]
00:28:27.977 [2024-10-14 16:53:32.352700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.352713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.352919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.352940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.353158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.353179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.353257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.353276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.353517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.353538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.353731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.353756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.353936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.353956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.354111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.354132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.354277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.354298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.354515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.354536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 
00:28:27.977 [2024-10-14 16:53:32.354640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.354662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.354767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.354788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.355032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.355053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.355214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.355235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.355402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.355423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.355614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.355635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.355752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.355774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.355864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.355885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.356050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.356076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.356290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.356311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 
00:28:27.977 [2024-10-14 16:53:32.356495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.356515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.977 qpair failed and we were unable to recover it. 00:28:27.977 [2024-10-14 16:53:32.356616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.977 [2024-10-14 16:53:32.356643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.356741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.356758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.356847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.356863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.356966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.356982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.357148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.357174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.357273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.357297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.357493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.357517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.357673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.357699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.357895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.357919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 
00:28:27.978 [2024-10-14 16:53:32.358023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.358048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.358228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.358254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.358419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.358444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.358622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.358648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.358825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.358850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.359007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.359030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.359186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.359210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.359377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.359402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.359513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.359536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.359710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.359736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 
00:28:27.978 [2024-10-14 16:53:32.359849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.359873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.360060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.360084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.360351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.360377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.360530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.360555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.360665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.360692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.360867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.360890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.361046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.361066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.361159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.361180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.361340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.361361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.361460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.361481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 
00:28:27.978 [2024-10-14 16:53:32.361629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.361651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.361758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.361779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.361997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.362018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.362124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.362145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.362292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.362312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.362464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.362485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.362593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.362621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.362786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.362807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.362903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.362928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.978 [2024-10-14 16:53:32.363033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.363054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 
00:28:27.978 [2024-10-14 16:53:32.363145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.978 [2024-10-14 16:53:32.363165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.978 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.363333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.363355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.363439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.363460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.363646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.363668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.363837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.363858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.364017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.364038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.364136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.364157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.364308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.364329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.364565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.364586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.364689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.364710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 
00:28:27.979 [2024-10-14 16:53:32.364812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.364833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.364919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.364941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.365110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.365131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.365359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.365381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.365501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.365521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.365617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.365639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.365800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.365821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.366040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.366061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.366220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.366241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.366409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.366431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 
00:28:27.979 [2024-10-14 16:53:32.366647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.366670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.366829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.366850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.366947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.366968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.367209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.367229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.367382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.367403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.367512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.367542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.367666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.367688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.367778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.367798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.368063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.368082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.368265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.368279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 
00:28:27.979 [2024-10-14 16:53:32.368413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.368426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.368606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.368623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.368700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.368712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.368864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.368878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.369077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.369090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.369237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.369250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.369337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.369349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.369560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.369582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.369780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.369807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.369910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.369931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 
00:28:27.979 [2024-10-14 16:53:32.370095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.979 [2024-10-14 16:53:32.370116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.979 qpair failed and we were unable to recover it. 00:28:27.979 [2024-10-14 16:53:32.370276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.370297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.370445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.370466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.370643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.370660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.370794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.370807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.370899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.370913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.370986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.370999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.371151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.371165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.371249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.371261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.371332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.371345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 
00:28:27.980 [2024-10-14 16:53:32.371491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.371504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.371658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.371673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.371833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.371853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.372005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.372025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.372135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.372156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.372335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.372356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.372445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.372465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.372559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.372579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.372737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.372755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.372896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.372911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 
00:28:27.980 [2024-10-14 16:53:32.372979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.372992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.373087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.373099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.373234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.373249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.373380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.373393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.373596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.373615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.373850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.373874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.374028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.374050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.374236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.374257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.374359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.374381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.374481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.374502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 
00:28:27.980 [2024-10-14 16:53:32.374664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.374687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.374793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.374815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.375075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.375096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.375249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.375270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.375438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.375459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.375634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.375656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.375831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.375852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.376011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.376032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.376114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.376139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.980 [2024-10-14 16:53:32.376301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.376322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 
00:28:27.980 [2024-10-14 16:53:32.376488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.980 [2024-10-14 16:53:32.376509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.980 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.376617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.376640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.376718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.376737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.376902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.376923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.377085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.377106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.377286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.377307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.377458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.377479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.377628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.377652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.377748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.377768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.377935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.377956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 
00:28:27.981 [2024-10-14 16:53:32.378059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.378080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.378241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.378262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.378449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.378470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.378564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.378589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.378741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.378756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.378834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.378847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.378948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.378962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.379060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.379083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.379179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.379200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.379385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.379408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 
00:28:27.981 [2024-10-14 16:53:32.379556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.379578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.379732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.379755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.379916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.379935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.380038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.380052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.380213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.380228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.380380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.380403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.380566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.380587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.380814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.380836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.381005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.381026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.381198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.381219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 
00:28:27.981 [2024-10-14 16:53:32.381457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.381478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.381627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.381649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.381745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.381766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.381953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.381975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.382190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.382211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.382356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.382377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.981 [2024-10-14 16:53:32.382537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.981 [2024-10-14 16:53:32.382558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.981 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.382716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.382737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.382902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.382928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.383046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.383067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 
00:28:27.982 [2024-10-14 16:53:32.383287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.383307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.383477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.383498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.383670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.383693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.383786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.383807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.383965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.383986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.384141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.384163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.384267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.384288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.384385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.384406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.384573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.384594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.384820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.384841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 
00:28:27.982 [2024-10-14 16:53:32.385059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.385080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.385324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.385344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.385497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.385518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.385684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.385706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.385928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.385950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.386105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.386125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.386241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.386262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.386475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.386496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.386720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.386742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.386913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.386934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 
00:28:27.982 [2024-10-14 16:53:32.387114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.387135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.387236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.387256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.387421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.387442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.387703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.387725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.387889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.387910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.388012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.388040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.388227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.388248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.388343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.388364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.388532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.388553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.388724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.388745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 
00:28:27.982 [2024-10-14 16:53:32.388959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.388981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.389078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.389099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.389198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.389219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.389468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.389489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.389704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.389726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.389877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.389898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.982 qpair failed and we were unable to recover it. 00:28:27.982 [2024-10-14 16:53:32.389999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.982 [2024-10-14 16:53:32.390021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.390126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.390147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.390304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.390325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.390548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.390569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 
00:28:27.983 [2024-10-14 16:53:32.390807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.390829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.391045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.391066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.391234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.391255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.391415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.391436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.391661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.391683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.391849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.391869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.392025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.392046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.392209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.392230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.392403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.392424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.392531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.392551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 
00:28:27.983 [2024-10-14 16:53:32.392715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.392737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.392849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.392870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.392972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.392994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.393209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.393229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.393322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.393343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.393517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.393538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.393627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.393649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.393753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.393774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.393866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.393887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.393994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.394015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 
00:28:27.983 [2024-10-14 16:53:32.394120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.394140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.394295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.394316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.394558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.394579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.394691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.394713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.394874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.394895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.395047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.395073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.395255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.395276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.395428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.395449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.395622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.395644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.395727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.395746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 
00:28:27.983 [2024-10-14 16:53:32.395920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.395942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.396159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.396179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.396328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.396349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.396531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.396552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.396701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.396724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.396822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.396843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.983 [2024-10-14 16:53:32.396986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.983 [2024-10-14 16:53:32.397007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.983 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.397172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.397192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.397438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.397459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.397547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.397568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 
00:28:27.984 [2024-10-14 16:53:32.397723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.397744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.397922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.397943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.398159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.398180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.398419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.398440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.398607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.398629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.398794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.398815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.399030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.399051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.399212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.399232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.399400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.399421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.399597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.399638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 
00:28:27.984 [2024-10-14 16:53:32.399806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.399826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.399979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.400000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.400176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.400197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.400292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.400312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.400417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.400437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.400617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.400639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.400734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.400755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.400844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.400865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.400966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.400986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.401089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.401110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 
00:28:27.984 [2024-10-14 16:53:32.401193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.401213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.401380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.401401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.401517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.401537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.401628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.401650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.401804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.401826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.401914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.401939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.402126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.402147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.402236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.402257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.402429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.402450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.402605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.402627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 
00:28:27.984 [2024-10-14 16:53:32.402774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.402795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.402956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.402977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.403133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.403154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.403318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.403337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.403484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.403505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.403656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.403677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.984 [2024-10-14 16:53:32.403828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.984 [2024-10-14 16:53:32.403849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.984 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.403999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.404021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.404188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.404209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.404404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.404425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 
00:28:27.985 [2024-10-14 16:53:32.404664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.404686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.404847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.404867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.405026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.405048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.405155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.405175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.405418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.405439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.405631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.405654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.405755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.405775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.405888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.405909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.406007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.406028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.406280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.406300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 
00:28:27.985 [2024-10-14 16:53:32.406519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.406540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.406637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.406659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.406766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.406787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.406946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.406967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.407135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.407156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.407347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.407368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.407520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.407540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.407778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.407800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.407988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.408009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.408160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.408182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 
00:28:27.985 [2024-10-14 16:53:32.408347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.408368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.408459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.408479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.408632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.408654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.408738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.408758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.408923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.408943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.409093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.409118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.409226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.409246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.409413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.409434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.409615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.409636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.409789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.409810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 
00:28:27.985 [2024-10-14 16:53:32.409918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.409938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.410036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.410057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.410250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.985 [2024-10-14 16:53:32.410270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.985 qpair failed and we were unable to recover it. 00:28:27.985 [2024-10-14 16:53:32.410437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.410458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.410563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.410583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.410761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.410782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.410935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.410955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.411053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.411074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.411160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.411181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.411399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.411420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 
00:28:27.986 [2024-10-14 16:53:32.411570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.411591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.411771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.411792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.411953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.411974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.412213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.412234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.412401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.412421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.412589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.412617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.412794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.412815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.412986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.413007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.413105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.413126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.413273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.413295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 
00:28:27.986 [2024-10-14 16:53:32.413462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.413483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.413702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.413724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.413875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.413896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.414049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.414070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.414262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.414283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.414512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.414533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.414696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.414718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.414941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.414962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.415225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.415245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.415399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.415420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 
00:28:27.986 [2024-10-14 16:53:32.415529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.415550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.415716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.415737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.415838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.415860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.416042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.416063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.416279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.416299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.416404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.416430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.416527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.416548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.416661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.416683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.416768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.416788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.416875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.416896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 
00:28:27.986 [2024-10-14 16:53:32.417005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.417026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.417241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.417262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.986 [2024-10-14 16:53:32.417427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.986 [2024-10-14 16:53:32.417448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.986 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.417606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.417627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.417896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.417917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.418081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.418102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.418323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.418343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.418439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.418460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.418658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.418680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.418883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.418904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 
00:28:27.987 [2024-10-14 16:53:32.419114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.419136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.419245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.419266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.419421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.419442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.419593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.419630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.419714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.419735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.419811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.419831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.419976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.419997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.420090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.420111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.420255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.420276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.420429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.420450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 
00:28:27.987 [2024-10-14 16:53:32.420643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.420665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.420879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.420900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.421056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.421078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.421261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.421281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.421373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.421394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.421495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.421515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.421625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.421646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.421793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.421814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.422051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.422071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.422183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.422204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 
00:28:27.987 [2024-10-14 16:53:32.422312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.422333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.422500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.422521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.422638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.422660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.422820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.422841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.423010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.423030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.423203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.423233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.423471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.423491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.423714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.423736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.423959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.423980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.424083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.424105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 
00:28:27.987 [2024-10-14 16:53:32.424334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.424354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.987 [2024-10-14 16:53:32.424536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.987 [2024-10-14 16:53:32.424557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.987 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.424645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.424665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.424907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.424928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.425106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.425126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.425296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.425317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.425426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.425446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.425689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.425712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.425794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.425814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.425985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.426006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 
00:28:27.988 [2024-10-14 16:53:32.426172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.426192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.426344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.426365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.426483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.426504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.426667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.426689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.426833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.426854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.427046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.427068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.427226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.427247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.427402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.427423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.427511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.427531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.427633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.427655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 
00:28:27.988 [2024-10-14 16:53:32.427761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.427783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.427898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.427918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.428036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.428056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.428142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.428163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.428278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.428299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.428397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.428417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.428588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.428615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.428775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.428796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.428941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.428962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.429058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.429079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 
00:28:27.988 [2024-10-14 16:53:32.429231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.429252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.429352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.429372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.429617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.429639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.429743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.429764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.429994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.430014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.430253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.430277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.430387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.430408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.430525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.430546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.430646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.430668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.430829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.430850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 
00:28:27.988 [2024-10-14 16:53:32.431068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.431089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.988 qpair failed and we were unable to recover it. 00:28:27.988 [2024-10-14 16:53:32.431245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.988 [2024-10-14 16:53:32.431266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.431443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.431465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.431708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.431729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.431910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.431931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.432119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.432141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.432358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.432379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.432544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.432565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.432785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.432806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.433028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.433049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 
00:28:27.989 [2024-10-14 16:53:32.433207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.433228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.433388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.433409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.433559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.433579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.433769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.433791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.433889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.433909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.434128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.434150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.434371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.434392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.434500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.434521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.434632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.434654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.434914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.434934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 
00:28:27.989 [2024-10-14 16:53:32.435095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.435115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.435278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.435298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.435341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7bbb0 (9): Bad file descriptor 00:28:27.989 [2024-10-14 16:53:32.435517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.435542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.435713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.435727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.435868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.435879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.436009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.436021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.436218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.436228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.436288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.436296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 00:28:27.989 [2024-10-14 16:53:32.436436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.989 [2024-10-14 16:53:32.436452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.989 qpair failed and we were unable to recover it. 
00:28:27.989 [2024-10-14 16:53:32.436549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.989 [2024-10-14 16:53:32.436564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420
00:28:27.989 qpair failed and we were unable to recover it.
00:28:27.989 [... the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." repeats for every reconnect attempt between 16:53:32.436549 and 16:53:32.467518 ...]
00:28:27.995 [2024-10-14 16:53:32.467501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:27.995 [2024-10-14 16:53:32.467518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420
00:28:27.995 qpair failed and we were unable to recover it.
00:28:27.995 [2024-10-14 16:53:32.467732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.467752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.467899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.467917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.468072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.468090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.468255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.468275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.468416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.468428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.468522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.468537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.468630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.468642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.468783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.468794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.468876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.468887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.469020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.469034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 
00:28:27.995 [2024-10-14 16:53:32.469175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.469187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.469249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.469261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.469424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.469436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.469580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.469592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.469729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.469748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.469961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.469978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.470134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.470151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.470289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.470305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.470475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.470487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.470716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.470729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 
00:28:27.995 [2024-10-14 16:53:32.470802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.470813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.470966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.470978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.471115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.471128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.471220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.471233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.471398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.471410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.471484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.471495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.471722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.471741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.471899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.471916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.471988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.472005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.472099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.472117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 
00:28:27.995 [2024-10-14 16:53:32.472206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.472223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.472320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.995 [2024-10-14 16:53:32.472338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.995 qpair failed and we were unable to recover it. 00:28:27.995 [2024-10-14 16:53:32.472505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.472519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.472659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.472672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.472764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.472776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.472925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.472937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.473020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.473031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.473122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.473134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.473265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.473277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.473412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.473424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 
00:28:27.996 [2024-10-14 16:53:32.473552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.473564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.473643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.473655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.473872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.473890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.474075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.474094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.474205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.474223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.474323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.474342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.474442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.474455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.474586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.474598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.474730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.474743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.474881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.474897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 
00:28:27.996 [2024-10-14 16:53:32.475046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.475058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.475270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.475282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.475434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.475447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.475596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.475614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.475745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.475762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.475952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.475970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.476071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.476090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.476248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.476265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.476350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.476369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.476465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.476482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 
00:28:27.996 [2024-10-14 16:53:32.476549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.476562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.476736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.476753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.476826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.476840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.476981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.476997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.477076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.477092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.477224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.477239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.477398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.477413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.477518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.477533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.477610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.477625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.477797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.477819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 
00:28:27.996 [2024-10-14 16:53:32.478000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.478023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.478107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.996 [2024-10-14 16:53:32.478130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.996 qpair failed and we were unable to recover it. 00:28:27.996 [2024-10-14 16:53:32.478348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.478371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.478476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.478501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.478683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.478699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.478903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.478918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.479070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.479086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.479158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.479172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.479323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.479339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.479568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.479583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 
00:28:27.997 [2024-10-14 16:53:32.479741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.479764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.479873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.479895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.480059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.480081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.480255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.480278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.480430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.480452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.480701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.480722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.480807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.480822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.480924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.480939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.481105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.481120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.481291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.481310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 
00:28:27.997 [2024-10-14 16:53:32.481409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.481424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.481582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.481598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.481690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.481707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.481856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.481876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.481969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.481992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.482088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.482110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.482260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.482283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.482478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.482502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.482609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.482632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.482816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.482836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 
00:28:27.997 [2024-10-14 16:53:32.482941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.482957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.483024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.483038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.483128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.483144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.483354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.483370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.483471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.483487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.483565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.483583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.483701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.483717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.483798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.483813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.483904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.483919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.484057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.484081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 
00:28:27.997 [2024-10-14 16:53:32.484239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.484262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.484420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.484445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.997 qpair failed and we were unable to recover it. 00:28:27.997 [2024-10-14 16:53:32.484540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.997 [2024-10-14 16:53:32.484563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.484748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.484766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.484848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.484864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.484935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.484950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.485130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.485206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.485452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.485519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.485787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.485824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.486002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.486034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 
00:28:27.998 [2024-10-14 16:53:32.486223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.486254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.486446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.486476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.486595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.486638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.486878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.486909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.487096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.487127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.487254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.487285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.487498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.487537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.487740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.487774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.487887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.487919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.488123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.488164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 
00:28:27.998 [2024-10-14 16:53:32.488342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.488374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.488476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.488507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.488620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.488653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.488821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.488842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.488937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.488958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.489121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.489143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.489292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.489312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.489393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.489413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.489574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.489595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.489778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.489811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 
00:28:27.998 [2024-10-14 16:53:32.489951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.489982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.490178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.490210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.490343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.490374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.490523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.490558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.490720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.490756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.490941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.490972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.491220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.491250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.491431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.491461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.998 [2024-10-14 16:53:32.491708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.998 [2024-10-14 16:53:32.491741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:27.998 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.491914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.491938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 
00:28:27.999 [2024-10-14 16:53:32.492045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.492066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.492224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.492245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.492463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.492484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.492662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.492684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.492772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.492793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.492953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.492973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.493087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.493121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.493238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.493270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.493526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.493557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.493790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.493814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 
00:28:27.999 [2024-10-14 16:53:32.494010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.494032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.494200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.494221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.494318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.494340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.494500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.494521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.494688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.494711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.494879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.494901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.495117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.495148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.495335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.495365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.495482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.495513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.495637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.495676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 
00:28:27.999 [2024-10-14 16:53:32.495883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.495905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.496008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.496028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.496123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.496144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.496325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.496345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.496444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.496465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.496577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.496598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.496765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.496786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.496980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.497001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.497149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.497169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.497330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.497350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 
00:28:27.999 [2024-10-14 16:53:32.497523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.497544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.497659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.497682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.497927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.497948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.498132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.498154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.498266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.498286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.498379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.498400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.498569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.498589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.498764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.498785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.498937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.498958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 00:28:27.999 [2024-10-14 16:53:32.499122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.999 [2024-10-14 16:53:32.499143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:27.999 qpair failed and we were unable to recover it. 
00:28:27.999 [2024-10-14 16:53:32.499248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.499268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.499439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.499460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.499631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.499664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.499851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.499883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.500053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.500084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.500288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.500319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.500628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.500663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.500930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.500962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.501207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.501240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.501384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.501415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 
00:28:28.000 [2024-10-14 16:53:32.501585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.501627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.501885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.501916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.502021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.502052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.502173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.502204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.502395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.502426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.502528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.502552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.502723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.502746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.502841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.502861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.502946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.502967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.503206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.503243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 
00:28:28.000 [2024-10-14 16:53:32.503347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.503378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.503500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.503531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.503648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.503682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.503880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.503900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.504064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.504085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.504196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.504217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.504417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.504448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.504715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.504748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.504938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.504968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.505092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.505123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 
00:28:28.000 [2024-10-14 16:53:32.505249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.505281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.505393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.505424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.505674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.505696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.505929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.505951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.506054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.506074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.506169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.506190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.506387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.506407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.506568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.506590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.506711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.506732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.506891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.506913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 
00:28:28.000 [2024-10-14 16:53:32.506997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.507017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.507113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.507134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.507389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.000 [2024-10-14 16:53:32.507410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.000 qpair failed and we were unable to recover it. 00:28:28.000 [2024-10-14 16:53:32.507492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.507513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.507735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.507758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.507979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.508000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.508085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.508105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.508190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.508211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.508330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.508351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.508463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.508484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 
00:28:28.001 [2024-10-14 16:53:32.508579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.508598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.508779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.508802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.508899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.508921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.509072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.509093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.509250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.509271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.509435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.509456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.509552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.509573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.509764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.509797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.509906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.509937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.510170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.510202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 
00:28:28.001 [2024-10-14 16:53:32.510426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.510458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.510632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.510664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.510900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.510931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.511170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.511202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.511373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.511403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.511645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.511679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.511857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.511888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.512078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.512109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.512293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.512323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.512557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.512578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 
00:28:28.001 [2024-10-14 16:53:32.512838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.512861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.513026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.513049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.513237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.513258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.513431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.513452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.513570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.513591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.513844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.513865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.514028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.514049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.514153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.514174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.514408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.514429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.514525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.514546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 
00:28:28.001 [2024-10-14 16:53:32.514699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.514722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.514974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.515005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.515191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.515222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.515538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.515569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.515788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.515820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.516066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.001 [2024-10-14 16:53:32.516097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.001 qpair failed and we were unable to recover it. 00:28:28.001 [2024-10-14 16:53:32.516304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.516341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.516527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.516559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.516757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.516789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.516983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.517014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 
00:28:28.002 [2024-10-14 16:53:32.517135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.517165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.517353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.517385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.517572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.517593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.517842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.517863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.517983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.518004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.518184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.518205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.518458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.518478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.518705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.518728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.518967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.518988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.519107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.519127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 
00:28:28.002 [2024-10-14 16:53:32.519349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.519381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.519593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.519632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.519819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.519850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.520028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.520060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.520320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.520350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.520541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.520573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.520776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.520808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.521032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.521053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.521213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.521234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 00:28:28.002 [2024-10-14 16:53:32.521402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.002 [2024-10-14 16:53:32.521423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.002 qpair failed and we were unable to recover it. 
00:28:28.002 [2024-10-14 16:53:32.521667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.002 [2024-10-14 16:53:32.521689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:28.002 qpair failed and we were unable to recover it.
00:28:28.002 [2024-10-14 16:53:32.521856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.002 [2024-10-14 16:53:32.521876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:28.002 qpair failed and we were unable to recover it.
00:28:28.002-00:28:28.284 [2024-10-14 16:53:32.522051 .. 16:53:32.573197] the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every remaining connection attempt in this span.
00:28:28.284 [2024-10-14 16:53:32.573456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.573486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.573754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.573777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.573947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.573968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.574129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.574150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.574365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.574388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.574563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.574612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.574838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.574875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.575070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.575101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.575236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.575266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.575506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.575537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 
00:28:28.284 [2024-10-14 16:53:32.575745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.575778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.575954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.575985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.576222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.576245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.576428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.576449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.576716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.576738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.576905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.576926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.577106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.577137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.577329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.577361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.577487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.577518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.577784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.577818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 
00:28:28.284 [2024-10-14 16:53:32.578105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.578137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.578419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.284 [2024-10-14 16:53:32.578449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.284 qpair failed and we were unable to recover it. 00:28:28.284 [2024-10-14 16:53:32.578625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.578657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.578904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.578935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.579123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.579154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.579419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.579450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.579749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.579782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.579932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.579963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.580088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.580109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.580365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.580387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 
00:28:28.285 [2024-10-14 16:53:32.580578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.580608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.580808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.580829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.580984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.581005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.581186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.581218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.581488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.581518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.581652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.581686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.581862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.581882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.582063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.582094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.582387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.582417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.582620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.582652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 
00:28:28.285 [2024-10-14 16:53:32.582902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.582924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.583076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.583097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.583340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.583361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.583530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.583551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.583760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.583783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.584076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.584098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.584358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.584383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.584538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.584560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.584768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.584791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.584961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.584982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 
00:28:28.285 [2024-10-14 16:53:32.585167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.585188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.585375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.585397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.585551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.585571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.585832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.585855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.586058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.586088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.586328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.586358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.586626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.586659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.285 qpair failed and we were unable to recover it. 00:28:28.285 [2024-10-14 16:53:32.586877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.285 [2024-10-14 16:53:32.586909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.587179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.587210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.587469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.587499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 
00:28:28.286 [2024-10-14 16:53:32.587699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.587734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.587927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.587948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.588060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.588081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.588203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.588225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.588391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.588412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.588588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.588617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.588783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.588805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.588989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.589019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.589224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.589255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.589440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.589471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 
00:28:28.286 [2024-10-14 16:53:32.589724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.589756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.589889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.589921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.590163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.590193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.590390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.590422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.590630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.590664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.590863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.590894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.591063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.591084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.591367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.591398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.591673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.591705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.591908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.591929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 
00:28:28.286 [2024-10-14 16:53:32.592094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.592116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.592294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.592325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.592564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.592596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.592731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.592762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.592965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.592986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.593139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.593160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.593414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.593451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.593722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.593755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.593899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.593930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.594109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.594131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 
00:28:28.286 [2024-10-14 16:53:32.594301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.594331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.594525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.594556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.594757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.286 [2024-10-14 16:53:32.594789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.286 qpair failed and we were unable to recover it. 00:28:28.286 [2024-10-14 16:53:32.594917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.594937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.595105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.595125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.595371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.595393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.595594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.595624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.595844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.595866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.596058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.596079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.596380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.596401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 
00:28:28.287 [2024-10-14 16:53:32.596684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.596707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.596955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.596979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.597145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.597166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.597361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.597392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.597618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.597651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.597782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.597813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.598089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.598110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.598426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.598447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.598684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.598706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.598814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.598835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 
00:28:28.287 [2024-10-14 16:53:32.599056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.599077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.599340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.599362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.599507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.599528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.599736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.599770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.600044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.600076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.600325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.600357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.600560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.600591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.600899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.600930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.601226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.601261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.601531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.601561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 
00:28:28.287 [2024-10-14 16:53:32.601777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.601810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.602023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.602053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.602201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.602232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.602421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.602452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.287 qpair failed and we were unable to recover it. 00:28:28.287 [2024-10-14 16:53:32.602631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.287 [2024-10-14 16:53:32.602657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.602857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.602887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.603077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.603113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.603389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.603421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.603623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.603656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.603944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.603977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 
00:28:28.288 [2024-10-14 16:53:32.604165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.604187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.604298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.604319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.604563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.604585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.604751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.604773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.604979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.605010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.605291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.605322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.605563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.605595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.605796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.605818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.605923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.605944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.606186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.606208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 
00:28:28.288 [2024-10-14 16:53:32.606465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.606487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.606640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.606663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.606873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.606893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.607152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.607183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.607449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.607481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.607678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.607710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.607828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.607859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.608137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.608169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.608354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.608384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.608581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.608624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 
00:28:28.288 [2024-10-14 16:53:32.608784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.608815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.609037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.609069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.609187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.609208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.609572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.609667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.609896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.609932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.610160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.610195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.610394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.288 [2024-10-14 16:53:32.610427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.288 qpair failed and we were unable to recover it. 00:28:28.288 [2024-10-14 16:53:32.610683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.610717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.610938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.610969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.611092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.611118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 
00:28:28.289 [2024-10-14 16:53:32.611293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.611314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.611548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.611579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.611730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.611762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.612034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.612064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.612303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.612324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.612500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.612523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.612724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.612750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.612923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.612944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.613065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.613086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.613262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.613283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 
00:28:28.289 [2024-10-14 16:53:32.613469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.613491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.613686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.613709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.613896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.613928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.614131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.614164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.614441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.614473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.614676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.614710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.614963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.614995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.615188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.615210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.615351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.615373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.615618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.615640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 
00:28:28.289 [2024-10-14 16:53:32.615818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.615839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.616072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.616093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.616340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.616361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.616553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.616575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.616775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.616797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.616975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.616997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.617225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.617257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.617527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.617558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.617802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.617834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.617972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.618004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 
00:28:28.289 [2024-10-14 16:53:32.618206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.618227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.618393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.618414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.618681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.618715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.289 [2024-10-14 16:53:32.618886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.289 [2024-10-14 16:53:32.618917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.289 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.619134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.619182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.619449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.619479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.619765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.619798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.619991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.620022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.620271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.620305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.620556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.620587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 
00:28:28.290 [2024-10-14 16:53:32.620897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.620931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.621192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.621213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.621465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.621487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.621726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.621749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.621992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.622012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.622264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.622286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.622537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.622564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.622827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.622853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.623061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.623082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.623280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.623301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 
00:28:28.290 [2024-10-14 16:53:32.623566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.623596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.623815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.623838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.624014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.624035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.624265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.624287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.624551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.624582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.624791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.624824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.625024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.625066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.625248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.625269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.625445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.625467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.625697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.625719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 
00:28:28.290 [2024-10-14 16:53:32.625899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.625921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.626159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.626180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.626370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.626390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.626646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.626669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.626904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.626925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.627153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.627175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.627459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.627480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.290 qpair failed and we were unable to recover it. 00:28:28.290 [2024-10-14 16:53:32.627734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.290 [2024-10-14 16:53:32.627757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.627987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.628008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.628244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.628266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 
00:28:28.291 [2024-10-14 16:53:32.628492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.628514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.628657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.628679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.628887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.628918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.629045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.629079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.629316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.629347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.629554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.629585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.629852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.629878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.630064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.630086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.630281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.630303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.630482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.630503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 
00:28:28.291 [2024-10-14 16:53:32.630718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.630743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.630946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.630968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.631085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.631109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.631238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.631259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.631423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.631444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.631649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.631671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.631846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.631867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.632106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.632139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.632419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.632451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.632653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.632686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 
00:28:28.291 [2024-10-14 16:53:32.632882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.632913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.633112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.633143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.633426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.633457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.633707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.633740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.633885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.633916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.634136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.634168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.634441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.634472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.634690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.634724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.634926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.634948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.635203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.635234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 
00:28:28.291 [2024-10-14 16:53:32.635490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.635523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.635789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.635812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.635938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.635960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.636139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.636161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.291 qpair failed and we were unable to recover it. 00:28:28.291 [2024-10-14 16:53:32.636419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.291 [2024-10-14 16:53:32.636440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.636617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.636640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.636820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.636851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.637125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.637156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.637381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.637412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.637684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.637717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 
00:28:28.292 [2024-10-14 16:53:32.638002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.638033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.638226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.638257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.638515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.638546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.638839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.638879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.639174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.639196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.639474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.639495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.639732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.639756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.639893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.639915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.640048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.640070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.640235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.640256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 
00:28:28.292 [2024-10-14 16:53:32.640480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.640502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.640726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.640749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.640933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.640955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.641223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.641255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.641559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.641591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.641883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.641915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.642165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.642205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.642480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.642502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.642696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.642720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.642849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.642870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 
00:28:28.292 [2024-10-14 16:53:32.643066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.643088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.643275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.643296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.643495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.643516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.643721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.643743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.644016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.644038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.644242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.644263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.644524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.644555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.644895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.644928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.645185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.645207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.645432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.645454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 
00:28:28.292 [2024-10-14 16:53:32.645637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.645660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.645763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.645783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.645896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.645918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.646041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.292 [2024-10-14 16:53:32.646063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.292 qpair failed and we were unable to recover it. 00:28:28.292 [2024-10-14 16:53:32.646255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.646276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.646457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.646478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.646663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.646685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.646864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.646894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.647026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.647058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.647402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.647433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 
00:28:28.293 [2024-10-14 16:53:32.647654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.647687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.647889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.647911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.648114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.648135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.648323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.648349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.648476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.648498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.648673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.648695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.648903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.648926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.649119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.649150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.649416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.649447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.649756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.649788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 
00:28:28.293 [2024-10-14 16:53:32.649938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.649969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.650123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.650153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.650384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.650415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.650694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.650727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.650930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.650951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.651160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.651182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.651384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.651406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.651649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.651672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.651905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.651927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.652131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.652153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 
00:28:28.293 [2024-10-14 16:53:32.652398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.652419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.652700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.652722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.652955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.652978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.653094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.293 [2024-10-14 16:53:32.653116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.293 qpair failed and we were unable to recover it. 00:28:28.293 [2024-10-14 16:53:32.653298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.653319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.653501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.653523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.653725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.653748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.653915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.653937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.654124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.654167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.654385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.654416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 
00:28:28.294 [2024-10-14 16:53:32.654681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.654715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.654911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.654933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.655099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.655121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.655323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.655353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.655552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.655582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.655736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.655766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.656021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.656053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.656203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.656234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.656506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.656536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.656738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.656771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 
00:28:28.294 [2024-10-14 16:53:32.656996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.657026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.657158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.657197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.657366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.657387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.657560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.657586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.657714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.657737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.657919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.657941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.658065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.658086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.658179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.658200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.658292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.658313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.658427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.658448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 
00:28:28.294 [2024-10-14 16:53:32.658628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.658651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.658844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.658867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.659075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.659096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.659310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.659332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.659582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.659611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.659789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.659811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.659924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.659944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.660148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.660170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.660425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.660446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.660574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.660595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 
00:28:28.294 [2024-10-14 16:53:32.660851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.660883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.661016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.661046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.661329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.294 [2024-10-14 16:53:32.661360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.294 qpair failed and we were unable to recover it. 00:28:28.294 [2024-10-14 16:53:32.661641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.661675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.661872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.661903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.662041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.662072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.662303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.662334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.662472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.662503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.662786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.662818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.662970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.663001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 
00:28:28.295 [2024-10-14 16:53:32.663187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.663217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.663498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.663529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.663744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.663776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.664028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.664059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.664217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.664248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.664552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.664583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.664867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.664900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.665180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.665211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.665501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.665533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.665751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.665784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 
00:28:28.295 [2024-10-14 16:53:32.665985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.666016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.666267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.666298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.666579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.666622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.666776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.666813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.666979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.667001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.667189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.667220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.667528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.667559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.667953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.667991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.668173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.668203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.668400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.668430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 
00:28:28.295 [2024-10-14 16:53:32.668688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.668710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.668892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.668915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.669078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.669099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.669214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.669237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.669519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.669540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.295 [2024-10-14 16:53:32.669718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.295 [2024-10-14 16:53:32.669740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.295 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.669923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.669956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.670185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.670216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.670504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.670534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.670846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.670878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 
00:28:28.296 [2024-10-14 16:53:32.671157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.671189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.671442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.671473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.671597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.671640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.671872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.671894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.672017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.672038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.672167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.672188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.672444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.672465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.672645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.672667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.672900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.672922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.673106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.673127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 
00:28:28.296 [2024-10-14 16:53:32.673325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.673347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.673635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.673658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.673794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.673815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.673992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.674014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.674210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.674246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.674523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.674555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.674844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.674877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.675123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.675144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.675345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.675366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.675631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.675653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 
00:28:28.296 [2024-10-14 16:53:32.675856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.675877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.676088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.676110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.676309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.676330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.676523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.676549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.676736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.676759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.676937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.676970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.677123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.677153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.677350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.677381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.677703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.677736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.677952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.677982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 
00:28:28.296 [2024-10-14 16:53:32.678183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.678205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.678421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.678443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.296 [2024-10-14 16:53:32.678719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.296 [2024-10-14 16:53:32.678741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.296 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.678858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.678879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.679011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.679033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.679209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.679232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.679419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.679441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.679632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.679655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.679963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.679985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.680220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.680242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 
00:28:28.297 [2024-10-14 16:53:32.680500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.680522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.680702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.680725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.680837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.680860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.681056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.681078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.681258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.681279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.681470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.681491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.681615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.681637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.681887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.681909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.682089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.682112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.682300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.682321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 
00:28:28.297 [2024-10-14 16:53:32.682523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.682545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.682803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.682827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.682943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.682964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.683064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.683085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.683342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.683362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.683639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.683661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.683819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.683840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.684023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.684044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.684325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.684355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 00:28:28.297 [2024-10-14 16:53:32.684570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.297 [2024-10-14 16:53:32.684609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.297 qpair failed and we were unable to recover it. 
00:28:28.297 [2024-10-14 16:53:32.684805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:28.297 [2024-10-14 16:53:32.684836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 
00:28:28.297 qpair failed and we were unable to recover it. 
00:28:28.297-00:28:28.303 [the same three-message error group -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats unchanged for every retried connection attempt from 16:53:32.685042 through 16:53:32.733509, always against the same tqpair, address, and port]
00:28:28.303 [2024-10-14 16:53:32.733776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.733799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.734010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.734031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.734273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.734295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.734524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.734546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.734772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.734795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.734970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.734992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.735273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.735295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.735534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.735555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.735660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.735681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.735853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.735873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 
00:28:28.304 [2024-10-14 16:53:32.738912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.738936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.739076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.739096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.739373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.739394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.739636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.739659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.739865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.739886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.740077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.740098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.740217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.740238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.740498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.740519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.740777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.740801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.741022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.741044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 
00:28:28.304 [2024-10-14 16:53:32.741232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.741254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.741454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.741475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.741772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.741795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.741981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.742003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.742122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.742143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.742334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.742355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.742518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.742539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.742772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.742793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.743026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.743048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.743342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.743364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 
00:28:28.304 [2024-10-14 16:53:32.743648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.743670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.743842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.743863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.744038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.744060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.744173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.744199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.744429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.744450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.744684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.744706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.744965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.744988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.304 [2024-10-14 16:53:32.745220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.304 [2024-10-14 16:53:32.745241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.304 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.745447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.745468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.745650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.745672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 
00:28:28.305 [2024-10-14 16:53:32.745854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.745876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.746050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.746071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.746351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.746373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.746611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.746633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.746805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.746827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.747096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.747118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.747351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.747372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.747650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.747673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.747893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.747914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.748091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.748112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 
00:28:28.305 [2024-10-14 16:53:32.748292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.748314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.748486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.748507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.748628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.748651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.748888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.748910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.749097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.749119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.749345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.749366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.749618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.749641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.749869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.749891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.749992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.750012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.750293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.750314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 
00:28:28.305 [2024-10-14 16:53:32.750452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.750475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.750726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.750748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.750955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.750976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.751152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.751175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.751411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.751432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.751675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.751698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.751805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.751827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.752007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.752028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.752138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.752159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.752356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.752378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 
00:28:28.305 [2024-10-14 16:53:32.752631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.752656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.752844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.752866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.753122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.753144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.753243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.753265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.305 qpair failed and we were unable to recover it. 00:28:28.305 [2024-10-14 16:53:32.753467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.305 [2024-10-14 16:53:32.753489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.753698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.753721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.753890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.753911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.754046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.754068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.754211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.754232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.754403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.754424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 
00:28:28.306 [2024-10-14 16:53:32.754654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.754677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.754808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.754829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.755013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.755035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.755170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.755192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.755448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.755469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.755670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.755693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.755819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.755840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.756032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.756054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.756303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.756324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.756610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.756632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 
00:28:28.306 [2024-10-14 16:53:32.756766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.756788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.756984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.757005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.757135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.757156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.757381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.757403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.757614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.757637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.757818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.757839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.758079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.758101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.758221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.758242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.758492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.758513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.758710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.758732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 
00:28:28.306 [2024-10-14 16:53:32.758964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.758990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.759121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.759143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.759430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.759452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.759736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.759758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.759892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.759914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.760092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.760113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.760320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.760341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.760544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.760565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.760802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.760824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.760954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.760975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 
00:28:28.306 [2024-10-14 16:53:32.761107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.761128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.761347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.761370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.306 [2024-10-14 16:53:32.761611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.306 [2024-10-14 16:53:32.761634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.306 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.761758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.761779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.761911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.761934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.762167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.762189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.762296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.762317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.762548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.762570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.762693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.762716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.762949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.762969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 
00:28:28.307 [2024-10-14 16:53:32.763146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.763168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.763477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.763500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.763702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.763725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.763907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.763928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.764124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.764145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.764326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.764349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.764578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.764607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.764797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.764820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.764932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.764954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.765232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.765253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 
00:28:28.307 [2024-10-14 16:53:32.765487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.765509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.765773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.765796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.765982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.766003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.766266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.766288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.766542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.766563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.766744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.766766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.766998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.767018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.767265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.767288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.767563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.767585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.767725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.767747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 
00:28:28.307 [2024-10-14 16:53:32.767883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.767908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.768098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.768119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.768382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.768403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.768583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.768623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.768799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.768819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.307 qpair failed and we were unable to recover it. 00:28:28.307 [2024-10-14 16:53:32.768954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.307 [2024-10-14 16:53:32.768975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.769224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.769245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.769364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.769385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.769671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.769695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.769927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.769949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 
00:28:28.308 [2024-10-14 16:53:32.770188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.770209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.770439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.770460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.770739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.770762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.771011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.771032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.771294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.771316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.771568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.771590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.771874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.771901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.772086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.772107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.772387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.772409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.772530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.772551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 
00:28:28.308 [2024-10-14 16:53:32.772729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.772752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.772965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.772986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.773157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.773180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.773435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.773457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.773586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.773618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.773807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.773828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.774089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.774111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.774322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.774344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.774573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.774594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.774788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.774810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 
00:28:28.308 [2024-10-14 16:53:32.774996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.775018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.775193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.775214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.775397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.775420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.775688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.775712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.775969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.775991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.776171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.776193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.776388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.776409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.776641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.776664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.776873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.776896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.777165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.777186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 
00:28:28.308 [2024-10-14 16:53:32.777407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.308 [2024-10-14 16:53:32.777435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.308 qpair failed and we were unable to recover it. 00:28:28.308 [2024-10-14 16:53:32.777629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.777651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.777850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.777871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.778159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.778180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.778433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.778455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.778565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.778586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.778822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.778845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.779079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.779103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.779299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.779320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.779526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.779547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 
00:28:28.309 [2024-10-14 16:53:32.779732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.779756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.779942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.779963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.780064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.780086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.780389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.780411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.780686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.780708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.780915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.780936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.781114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.781136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.781328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.781350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.781520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.781544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.781783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.781807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 
00:28:28.309 [2024-10-14 16:53:32.781990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.782012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.782317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.782339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.782615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.782637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.782824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.782845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.783103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.783124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.783297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.783318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.783593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.783627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.783864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.783887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.784070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.784091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.784393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.784415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 
00:28:28.309 [2024-10-14 16:53:32.784676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.784700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.784829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.784850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.785109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.785130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.785424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.785445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.785724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.785746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.785979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.309 [2024-10-14 16:53:32.786000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.309 qpair failed and we were unable to recover it. 00:28:28.309 [2024-10-14 16:53:32.786233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.786253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.786519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.786540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.786724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.786746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.787017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.787039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 
00:28:28.310 [2024-10-14 16:53:32.787261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.787288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.787402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.787424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.787730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.787752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.787913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.787935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.788192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.788214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.788431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.788452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.788736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.788758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.788944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.788965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.789196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.789217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.789453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.789474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 
00:28:28.310 [2024-10-14 16:53:32.789736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.789758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.789934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.789956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.790091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.790113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.790466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.790488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.790676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.790699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.790895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.790917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.791041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.791063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.791182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.791203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.791388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.791410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.791571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.791593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 
00:28:28.310 [2024-10-14 16:53:32.791788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.791810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.792041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.792063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.792290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.792312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.792545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.792566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.792813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.792836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.792966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.792987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.793090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.793111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.793376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.793399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.793652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.793675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.793854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.793876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 
00:28:28.310 [2024-10-14 16:53:32.794009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.794030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.794199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.794221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.794454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.310 [2024-10-14 16:53:32.794476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.310 qpair failed and we were unable to recover it. 00:28:28.310 [2024-10-14 16:53:32.794649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.794672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.794919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.794940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.795191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.795213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.795472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.795495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.795728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.795751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.795885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.795906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.796088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.796111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 
00:28:28.311 [2024-10-14 16:53:32.796362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.796391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.796592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.796622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.796882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.796903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.797022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.797044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.797154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.797176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.797472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.797495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.797668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.797692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.797874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.797896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.798080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.798103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.798364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.798385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 
00:28:28.311 [2024-10-14 16:53:32.798509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.798531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.798820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.798842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.799023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.799045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.799224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.799245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.799523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.799545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.799819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.799842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.800024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.800045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.800241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.800263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.800449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.800471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.800697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.800720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 
00:28:28.311 [2024-10-14 16:53:32.800978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.800999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.801273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.801296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.801534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.801556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.801733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.801756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.801895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.801916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.802122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.802143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.802332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.802354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.802524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.802545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.311 qpair failed and we were unable to recover it. 00:28:28.311 [2024-10-14 16:53:32.802812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.311 [2024-10-14 16:53:32.802836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.803056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.803079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 
00:28:28.312 [2024-10-14 16:53:32.803281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.803302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.803501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.803523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.803751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.803774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.804006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.804027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.804149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.804170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.804275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.804296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.804411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.804433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.804687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.804710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.804847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.804869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.805051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.805073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 
00:28:28.312 [2024-10-14 16:53:32.805302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.805328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.805506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.805527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.805769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.805791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.806044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.806067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.806471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.806493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.806720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.806743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.806928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.806950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.807095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.807116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.807234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.807256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.807427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.807449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 
00:28:28.312 [2024-10-14 16:53:32.807730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.807752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.807930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.807952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.808076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.808098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.808356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.808378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.808587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.808616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.808781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.808802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.808976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.808998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.809196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.809218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.809341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.809362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.809615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.809638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 
00:28:28.312 [2024-10-14 16:53:32.809847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.809871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.810101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.810122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.810302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.312 [2024-10-14 16:53:32.810323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.312 qpair failed and we were unable to recover it. 00:28:28.312 [2024-10-14 16:53:32.810508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.810531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.810727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.810750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.810979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.811000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.811127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.811149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.811417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.811438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.811568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.811590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.811782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.811804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 
00:28:28.313 [2024-10-14 16:53:32.811988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.812009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.812154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.812175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.812370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.812391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.812579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.812611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.812798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.812819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.812993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.813015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.813317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.813339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.813514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.813536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.813717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.813740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.813926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.813947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 
00:28:28.313 [2024-10-14 16:53:32.814060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.814086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.814221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.814243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.814415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.814437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.814634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.814656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.814824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.814845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.815026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.815047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.815190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.815212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.815406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.815429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.815624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.815648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.815882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.815903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 
00:28:28.313 [2024-10-14 16:53:32.816134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.816155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.816419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.816440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.816680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.816703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.816934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.816956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.817143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.817166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.817488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.817509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.817775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.817798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.818023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.818045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.818228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.818249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.818498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.818519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 
00:28:28.313 [2024-10-14 16:53:32.818773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.818795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.818923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.313 [2024-10-14 16:53:32.818944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.313 qpair failed and we were unable to recover it. 00:28:28.313 [2024-10-14 16:53:32.819063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.819085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.819366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.819387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.819642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.819665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.819846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.819867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.819999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.820021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.820212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.820252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.820490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.820506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.820682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.820698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 
00:28:28.314 [2024-10-14 16:53:32.820862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.820887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.821057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.821079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.821342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.821367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.821632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.821668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.821871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.821892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.822086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.822106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.822414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.822435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.822614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.822634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.822754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.822774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.822903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.822931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 
00:28:28.314 [2024-10-14 16:53:32.823119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.823158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.823488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.823519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.823741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.823773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.823973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.824005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.824138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.824167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.824359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.824390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.824573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.824611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.824884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.824915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.825105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.825138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.825438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.825468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 
00:28:28.314 [2024-10-14 16:53:32.825663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.825696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.825920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.825948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.826150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.826182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.826460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.826490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.826719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.826751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.826969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.826999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.827161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.827192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.827441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.827471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.827721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.827754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.827905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.827935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 
00:28:28.314 [2024-10-14 16:53:32.828134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.828162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.828439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.828472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.314 qpair failed and we were unable to recover it. 00:28:28.314 [2024-10-14 16:53:32.828679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.314 [2024-10-14 16:53:32.828710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.828901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.828931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.829129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.829158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.829422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.829455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.829673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.829705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.830036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.830113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.830456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.830493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.830807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.830843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 
00:28:28.315 [2024-10-14 16:53:32.831073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.831106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.831262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.831296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.831550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.831583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.831780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.831815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.832069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.832101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.832303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.832335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.832545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.832578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.832870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.832902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.833105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.833137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.833436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.833474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 
00:28:28.315 [2024-10-14 16:53:32.833682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.833724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.833917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.833942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.834154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.834178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.834400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.834422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.834593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.834619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.834844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.834862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.835040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.835055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.835298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.835314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.835541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.835556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.835763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.835790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 
00:28:28.315 [2024-10-14 16:53:32.836004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.836028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.836156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.836180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.836309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.836333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.836609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.836632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.836769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.836786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.836944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.836959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.837124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.837142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.837304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.837321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.837575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.837591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 00:28:28.315 [2024-10-14 16:53:32.837846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.315 [2024-10-14 16:53:32.837863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.315 qpair failed and we were unable to recover it. 
00:28:28.315 [2024-10-14 16:53:32.837974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.837994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.838178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.838202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.838424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.838448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.838696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.838723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.838859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.838884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.839010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.839027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.839265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.839284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.839550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.839643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.839953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.840010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.840160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.840185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 
00:28:28.316 [2024-10-14 16:53:32.840498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.840521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.840716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.840741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.840872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.840894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.841133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.841155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.841356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.841379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.841560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.841582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.841773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.841798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.841971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.841992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.842193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.842215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.842508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.842530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 
00:28:28.316 [2024-10-14 16:53:32.842720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.842751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.842926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.842951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.843191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.843213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.843388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.843411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.843514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.843536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.843728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.843752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.843937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.843959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.844199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.844222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.844440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.844463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.844654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.844678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 
00:28:28.316 [2024-10-14 16:53:32.844811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.844834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.844935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.844959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.845115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.845137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.845312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.845334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.845597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.845630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.845811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.316 [2024-10-14 16:53:32.845834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.316 qpair failed and we were unable to recover it. 00:28:28.316 [2024-10-14 16:53:32.845969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.317 [2024-10-14 16:53:32.845992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.317 qpair failed and we were unable to recover it. 00:28:28.317 [2024-10-14 16:53:32.846204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.317 [2024-10-14 16:53:32.846226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.317 qpair failed and we were unable to recover it. 00:28:28.317 [2024-10-14 16:53:32.846459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.317 [2024-10-14 16:53:32.846482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.317 qpair failed and we were unable to recover it. 00:28:28.317 [2024-10-14 16:53:32.846662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.317 [2024-10-14 16:53:32.846686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.317 qpair failed and we were unable to recover it. 
00:28:28.322 [2024-10-14 16:53:32.883264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.883286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.883444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.883465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.883619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.883643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.883742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.883764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.883917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.883939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.884097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.884119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.884298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.884319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.884407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.884433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.884530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.884551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.884658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.884681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 
00:28:28.322 [2024-10-14 16:53:32.884863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.884885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.884996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.885018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.885101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.885123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.885297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.885318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.885439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.885461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.885573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.885594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.885705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.885727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.885897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.885919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.886016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.886037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.886216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.886238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 
00:28:28.322 [2024-10-14 16:53:32.886344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.886365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.886469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.886490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.886677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.886699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.886804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.886825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.886936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.886958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.887118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.887140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.887310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.887331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.887498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.887519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.887613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.887635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.887738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.887760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 
00:28:28.322 [2024-10-14 16:53:32.887931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.887953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.888167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.888189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.888307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.888329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.888419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.888441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.888558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.888579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.888761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.888783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.888979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.889001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.889087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.889108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.889198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.889219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.889340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.889361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 
00:28:28.322 [2024-10-14 16:53:32.889550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.322 [2024-10-14 16:53:32.889572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.322 qpair failed and we were unable to recover it. 00:28:28.322 [2024-10-14 16:53:32.889756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.889778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.889943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.889965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.890116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.890137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.890222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.890243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.890337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.890358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.890517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.890537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.890619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.890646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.890740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.890762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.890970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.890991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 
00:28:28.323 [2024-10-14 16:53:32.891145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.891166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.891382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.891405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.891577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.891598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.891707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.891729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.891970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.891992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.892234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.892255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.892354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.892376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.892481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.892503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.892616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.892639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.892753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.892775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 
00:28:28.323 [2024-10-14 16:53:32.892887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.892909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.893008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.893030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.893134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.893154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.893328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.893350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.893449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.893470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.893567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.893590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.893694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.893715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.893799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.893821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.893924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.893945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.894046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.894067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 
00:28:28.323 [2024-10-14 16:53:32.894221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.894243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.894325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.894348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.894460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.894481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.894593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.894645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.894751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.894773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.894929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.894951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.895042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.895063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.895239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.895261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.895500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.895521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.895714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.895737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 
00:28:28.323 [2024-10-14 16:53:32.895915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.895938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.896161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.896183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.896425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.896447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.896625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.896648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.896770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.896792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.896963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.896984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.897152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.323 [2024-10-14 16:53:32.897173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.323 qpair failed and we were unable to recover it. 00:28:28.323 [2024-10-14 16:53:32.897340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.897366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.897661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.897684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.897858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.897879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 
00:28:28.324 [2024-10-14 16:53:32.898011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.898032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.898374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.898396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.898611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.898633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.898791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.898812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.899003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.899024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.899142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.899164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.899405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.899427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.899652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.899674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.899946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.899968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.900198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.900220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 
00:28:28.324 [2024-10-14 16:53:32.900497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.900519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.900732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.900755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.900926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.900948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.901122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.901143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.901330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.901351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.901586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.901616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.901861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.901883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.902009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.902031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.324 [2024-10-14 16:53:32.902126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.324 [2024-10-14 16:53:32.902147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.324 qpair failed and we were unable to recover it. 00:28:28.608 [2024-10-14 16:53:32.902405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.608 [2024-10-14 16:53:32.902428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.608 qpair failed and we were unable to recover it. 
00:28:28.608 [2024-10-14 16:53:32.902722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.902747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.902865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.902887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.903060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.903081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.903346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.903368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.903466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.903488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.903721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.903744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.903927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.903949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.904070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.904093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.904350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.904371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.904554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.904576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 
00:28:28.609 [2024-10-14 16:53:32.904754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.904775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.904944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.904965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.905053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.905074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.905208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.905230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.905408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.905430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.905673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.905696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.905823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.905844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.905974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.905999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.906159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.906180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 00:28:28.609 [2024-10-14 16:53:32.906449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.609 [2024-10-14 16:53:32.906470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.609 qpair failed and we were unable to recover it. 
00:28:28.609 [2024-10-14 16:53:32.906746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.609 [2024-10-14 16:53:32.906769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:28.609 qpair failed and we were unable to recover it.
00:28:28.609 [2024-10-14 16:53:32.906895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.609 [2024-10-14 16:53:32.906916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:28.609 qpair failed and we were unable to recover it.
00:28:28.609 [2024-10-14 16:53:32.907093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.609 [2024-10-14 16:53:32.907115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:28.609 qpair failed and we were unable to recover it.
00:28:28.615 [same three-line failure repeated for every subsequent reconnect attempt: connect() failed, errno = 111 in posix.c:1055:posix_sock_create, followed by the sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 in nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock, followed by "qpair failed and we were unable to recover it." — recurring continuously from 16:53:32.907270 through 16:53:32.953082]
00:28:28.615 [2024-10-14 16:53:32.953367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.953399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.953687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.953721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.953912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.953933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.954197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.954229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.954434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.954464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.954723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.954745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.954999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.955020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.955277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.955298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.955557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.955578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.955826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.955848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 
00:28:28.615 [2024-10-14 16:53:32.956037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.956058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.956252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.956274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.956382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.956403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.956654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.956677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.956846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.956868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.957054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.957075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.957409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.957430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.957686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.957709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.957904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.957925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.958121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.958142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 
00:28:28.615 [2024-10-14 16:53:32.958348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.958370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.958493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.958515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.958673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.958696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.958871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.958893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.959078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.959100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.959314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.615 [2024-10-14 16:53:32.959344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.615 qpair failed and we were unable to recover it. 00:28:28.615 [2024-10-14 16:53:32.959642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.959676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.959846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.959884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.960144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.960175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.960388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.960419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 
00:28:28.616 [2024-10-14 16:53:32.960682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.960715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.960934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.960966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.961175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.961206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.961481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.961513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.961726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.961759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.961923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.961954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.962177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.962209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.962411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.962442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.962591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.962634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.962866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.962898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 
00:28:28.616 [2024-10-14 16:53:32.963068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.963089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.963287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.963319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.963545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.963575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.963800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.963833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.963983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.964005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.964213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.964244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.964470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.964501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.964725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.964758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.964939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.964962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.965085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.965106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 
00:28:28.616 [2024-10-14 16:53:32.965211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.965232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.965427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.965457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.965740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.965773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.965923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.965954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.966166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.966243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.966543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.966579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.966800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.966834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.967043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.967077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.967239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.967272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.967465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.967499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 
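For context on the wall of identical failures above: errno 111 on Linux is ECONNREFUSED, meaning the host-side connect() issued from posix_sock_create() reaches 10.0.0.2 but nothing is accepting on TCP port 4420 while the target is down, so nvme_tcp_qpair_connect_sock() keeps reporting the qpair as failed and unrecoverable and the test keeps retrying. A minimal standalone C sketch (address and port copied from the log; this is only an illustration of the failure mode, not SPDK code, and it assumes 10.0.0.2 is reachable but has no listener on that port) reproduces the same errno:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Same endpoint as in the log: 10.0.0.2, NVMe/TCP port 4420. */
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With no listener on the port, connect() fails with ECONNREFUSED (111). */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }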
00:28:28.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 699384 Killed "${NVMF_APP[@]}" "$@" 00:28:28.616 [2024-10-14 16:53:32.967786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.616 [2024-10-14 16:53:32.967822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.616 qpair failed and we were unable to recover it. 00:28:28.616 [2024-10-14 16:53:32.968029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.968061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.968205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.968237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:28.617 [2024-10-14 16:53:32.968490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.968524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.968744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.968778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:28.617 [2024-10-14 16:53:32.969001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.969036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.969189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:28.617 [2024-10-14 16:53:32.969216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.969489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:28.617 [2024-10-14 16:53:32.969523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 
00:28:28.617 [2024-10-14 16:53:32.969788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.617 [2024-10-14 16:53:32.969823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.969977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.969999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.970178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.970209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.970425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.970457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.970715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.970749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.970960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.970985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.971115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.971137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.971950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.971985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.972189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.972210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 
00:28:28.617 [2024-10-14 16:53:32.972388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.972411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.972644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.972674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.972918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.972941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.973126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.973147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.973434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.973459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.973653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.973675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.973908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.973932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.974113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.974136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.974340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.974362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.974474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.974495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 
00:28:28.617 [2024-10-14 16:53:32.974700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.974723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.974858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.974880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.975009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.975031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.975247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.975269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.975467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.975489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.975673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.975696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.975813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.617 [2024-10-14 16:53:32.975834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.617 qpair failed and we were unable to recover it. 00:28:28.617 [2024-10-14 16:53:32.976039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.976061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.976174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.976196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.976404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.976426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 
00:28:28.618 [2024-10-14 16:53:32.976554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.976576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.976763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.976787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.976898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.976922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=700111 00:28:28.618 [2024-10-14 16:53:32.977166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.977191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 700111 00:28:28.618 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:28.618 [2024-10-14 16:53:32.977426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.977452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.977633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.977657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 700111 ']' 00:28:28.618 [2024-10-14 16:53:32.977841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.977867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.978046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.978069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 
00:28:28.618 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.618 [2024-10-14 16:53:32.978202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.978225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:28.618 [2024-10-14 16:53:32.978416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.978441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.978672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.978700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.618 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.618 [2024-10-14 16:53:32.978952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:28.618 [2024-10-14 16:53:32.979029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 16:53:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.618 [2024-10-14 16:53:32.979300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.979339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.979490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.979528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.979733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.979769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 
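The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which blocks until the freshly launched nvmf_tgt (pid 700111 in this run) has its RPC socket up before the test issues any RPCs. A rough standalone C sketch of that kind of readiness poll (socket path taken from the log; this only illustrates the idea and is not the actual waitforlisten implementation):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Retry connecting to a UNIX domain socket until something accepts on it. */
    static int wait_for_listen(const char *path, int max_tries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_tries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;              /* the RPC server is listening */
            }
            close(fd);
            usleep(100 * 1000);        /* wait 100 ms between attempts */
        }
        return -1;
    }

    int main(void)
    {
        /* Socket path reported in the log above. */
        if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
            printf("listener is up\n");
        else
            printf("gave up waiting\n");
        return 0;
    }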
00:28:28.618 [2024-10-14 16:53:32.979990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.980021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.980292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.980336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.980544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.980578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.980734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.980766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.980909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.980941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.981072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.981104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.981260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.981291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.981514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.981550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.981803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.618 [2024-10-14 16:53:32.981837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.618 qpair failed and we were unable to recover it. 00:28:28.618 [2024-10-14 16:53:32.981987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.619 [2024-10-14 16:53:32.982021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.619 qpair failed and we were unable to recover it. 
00:28:28.619 [2024-10-14 16:53:32.982183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.619 [2024-10-14 16:53:32.982215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420
00:28:28.619 qpair failed and we were unable to recover it.
00:28:28.619 [2024-10-14 16:53:32.986815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.619 [2024-10-14 16:53:32.986841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:28.619 qpair failed and we were unable to recover it.
00:28:28.622 [2024-10-14 16:53:33.010730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.622 [2024-10-14 16:53:33.010773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420
00:28:28.622 qpair failed and we were unable to recover it.
[The same connect() failed (errno = 111) / sock connection error pair repeats for every reconnect attempt against addr=10.0.0.2, port=4420 from 16:53:32.982 through 16:53:33.022, across tqpairs 0x7f7124000b90, 0x7f7120000b90, and 0x7f712c000b90; each attempt ends with "qpair failed and we were unable to recover it."]
00:28:28.624 [2024-10-14 16:53:33.022677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.022708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.022840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.022867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.022992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.023021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.023221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.023249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.023427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.023457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.023589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.023628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.023814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.023844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.023960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.023989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.024159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.024196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.024388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.024417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 
00:28:28.624 [2024-10-14 16:53:33.024589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.024620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.024805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.024832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.024961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.024984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.025147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.025171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.624 qpair failed and we were unable to recover it. 00:28:28.624 [2024-10-14 16:53:33.025268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.624 [2024-10-14 16:53:33.025293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.025411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.025433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.025628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.025652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.025749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.025764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.025879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.025895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.026051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.026066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 
00:28:28.625 [2024-10-14 16:53:33.026158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.026173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.026252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.026266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.026370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.026386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.026492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.026507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.026592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.026633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.026791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.026807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.026887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.026901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.026978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.026992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.027069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.027083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.027176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.027200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 
00:28:28.625 [2024-10-14 16:53:33.027305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.027330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.027517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.027539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.027653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.027676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.027781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.027805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.028055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.028076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.028189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.028204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.028352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.028368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.028451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.028464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.028621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.028639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.028746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.028761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 
00:28:28.625 [2024-10-14 16:53:33.028922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.028937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.029088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.029104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.029185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.029200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.029278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.029292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.029365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.029379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.029527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.029552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.029691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.029716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.029820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.029844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.029951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.029980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.030070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.030093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 
00:28:28.625 [2024-10-14 16:53:33.030191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.030213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.030300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.030323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.625 qpair failed and we were unable to recover it. 00:28:28.625 [2024-10-14 16:53:33.030516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.625 [2024-10-14 16:53:33.030536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.030624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.030641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.030810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.030826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.030884] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:28:28.626 [2024-10-14 16:53:33.030951] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.626 [2024-10-14 16:53:33.030986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.031005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.031095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.031108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.031284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.031298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.031472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.031488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 
00:28:28.626 [2024-10-14 16:53:33.031580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.031595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.031710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.031731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.031827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.031847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.031944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.031965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.032158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.032184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.032409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.032436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.032561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.032584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.032696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.032731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.032829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.032847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.033005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.033022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 
00:28:28.626 [2024-10-14 16:53:33.033181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.033199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.033292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.033310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.033395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.033411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.033561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.033579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.033719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.033737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.033892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.033908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.034019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.034037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.034118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.034133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.034216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.034238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.034343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.034368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 
00:28:28.626 [2024-10-14 16:53:33.034460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.034485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.034632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.034657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.034825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.034850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.034949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.034974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.035099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.035124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.035215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.035240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.035346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.035365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.035482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.035502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.035668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.035745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.035966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.036004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 
00:28:28.626 [2024-10-14 16:53:33.036262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.036295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.626 [2024-10-14 16:53:33.036408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.626 [2024-10-14 16:53:33.036439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.626 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.036630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.036662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.036843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.036876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.037010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.037042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.037234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.037267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.037448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.037479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.037677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.037710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.037928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.037959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.038094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.038126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 
00:28:28.627 [2024-10-14 16:53:33.038308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.038340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.038522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.038555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.038699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.038731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.038920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.038952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.039088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.039121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.039243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.039275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.039519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.039559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.039701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.039734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.039931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.039963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.040094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.040126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 
00:28:28.627 [2024-10-14 16:53:33.040352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.040384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.040522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.040554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.040695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.040727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.040860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.040894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.041085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.041119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.041242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.041281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.041473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.041506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.041632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.041667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.041792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.041823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.042034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.042068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 
00:28:28.627 [2024-10-14 16:53:33.042314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.042346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.042460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.042492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.042617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.042649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.042827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.042859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.043048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.043081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.043204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.627 [2024-10-14 16:53:33.043236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.627 qpair failed and we were unable to recover it. 00:28:28.627 [2024-10-14 16:53:33.043367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.043399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.043527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.043559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.043682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.043717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.043842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.043874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 
00:28:28.628 [2024-10-14 16:53:33.044010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.044043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.044230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.044263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.044440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.044471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.044592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.044634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.044759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.044791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.044909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.044940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.045053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.045089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.045201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.045221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.045368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.045388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.045534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.045549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 
00:28:28.628 [2024-10-14 16:53:33.045693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.045710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.045783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.045799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.045967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.045987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.046081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.046096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.046237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.046252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.046336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.046352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.046511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.046527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.046636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.046652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.046735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.046749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.046822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.046837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 
00:28:28.628 [2024-10-14 16:53:33.046973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.046989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.047126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.047143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.047282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.047297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.047449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.047465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.047538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.047552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.047647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.047663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.047832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.047847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.048023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.048038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.048143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.048158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.048242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.048256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 
00:28:28.628 [2024-10-14 16:53:33.048348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.048365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.048441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.048456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.048553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.628 [2024-10-14 16:53:33.048568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.628 qpair failed and we were unable to recover it. 00:28:28.628 [2024-10-14 16:53:33.048655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.048671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.048757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.048772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.048850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.048864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.049089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.049106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.049196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.049211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.049287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.049301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.049448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.049465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 
00:28:28.629 [2024-10-14 16:53:33.049624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.049639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.049726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.049740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.049908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.049922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.050007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.050021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.050106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.050120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.050188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.050202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.050289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.050303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.050509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.050524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.050598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.050639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.050733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.050750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 
00:28:28.629 [2024-10-14 16:53:33.050827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.050840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.050911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.050925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.051078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.051097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.051186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.051201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.051275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.051289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.051357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.051371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.051451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.051465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.051534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.051549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.051639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.051654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.051740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.051754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 
00:28:28.629 [2024-10-14 16:53:33.051827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.051840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.051985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.052001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.052090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.052107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.052182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.052196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.052274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.052287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.052467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.052496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.052569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.052584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.052657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.052671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.052762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.052776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.052854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.052867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 
00:28:28.629 [2024-10-14 16:53:33.052937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.052950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.053021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.053034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.629 [2024-10-14 16:53:33.053177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.629 [2024-10-14 16:53:33.053192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.629 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.053261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.053275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.053350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.053365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.053455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.053470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.053614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.053630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.053796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.053811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.053891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.053905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.053987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.054001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 
00:28:28.630 [2024-10-14 16:53:33.054084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.054097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.054169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.054183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.054253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.054266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.054356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.054372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.054443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.054457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.054533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.054546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.054687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.054704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.054810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.054825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.054900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.054913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.055007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.055022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 
00:28:28.630 [2024-10-14 16:53:33.055110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.055124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.055268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.055283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.055430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.055464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.055618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.055638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.055733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.055753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.055904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.055923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.056021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.056039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.056143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.056162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.056250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.056269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.056353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.056372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 
00:28:28.630 [2024-10-14 16:53:33.056461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.056481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.056560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.056579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.056802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.056823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.056975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.056995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.057097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.057115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.057267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.057286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.057370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.057389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.057492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.057513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.057614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.057634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.630 qpair failed and we were unable to recover it. 00:28:28.630 [2024-10-14 16:53:33.057802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.630 [2024-10-14 16:53:33.057823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 
00:28:28.631 [2024-10-14 16:53:33.057982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.058002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.058103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.058122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.058200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.058219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.058300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.058319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.058396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.058434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.058561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.058581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.058713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.058766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.058941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.058966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.059080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.059103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.059210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.059234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 
00:28:28.631 [2024-10-14 16:53:33.059385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.059405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.059485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.059504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.059617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.059639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.059725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.059745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.059830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.059851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.059958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.059980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.060066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.060084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.060312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.060332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.060486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.060505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.060676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.060697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 
00:28:28.631 [2024-10-14 16:53:33.060892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.060923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.061068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.061101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.061346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.061385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.061652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.061685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.061863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.061896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.062045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.062065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.062308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.062327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.062494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.062514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.062613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.062634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.062721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.062741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 
00:28:28.631 [2024-10-14 16:53:33.062846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.062865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.063110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.063130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.063226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.063245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.063415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.631 [2024-10-14 16:53:33.063435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.631 qpair failed and we were unable to recover it. 00:28:28.631 [2024-10-14 16:53:33.063633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.063653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.063759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.063785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.063888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.063907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.064091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.064122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.064296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.064328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.064540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.064572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 
00:28:28.632 [2024-10-14 16:53:33.064698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.064729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.064914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.064947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.065191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.065210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.065371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.065390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.065542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.065561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.065752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.065778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.065893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.065914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.066138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.066159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.066270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.066293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.066403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.066425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 
00:28:28.632 [2024-10-14 16:53:33.066595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.066623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.066802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.066831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.067015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.067036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.067207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.067228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.067407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.067429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.067546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.067567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.067754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.067777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.067936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.067957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.068044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.068064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.068223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.068244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 
00:28:28.632 [2024-10-14 16:53:33.068427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.068459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.068631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.068665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.068803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.068841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.069124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.069156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.069334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.069364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.069536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.069568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.069830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.069870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.070046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.070077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.070187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.070219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 00:28:28.632 [2024-10-14 16:53:33.070404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.070436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.632 qpair failed and we were unable to recover it. 
00:28:28.632 [2024-10-14 16:53:33.070694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.632 [2024-10-14 16:53:33.070728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.070946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.070978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.071087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.071112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.071226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.071248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.071379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.071400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.071640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.071663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.071769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.071790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.071960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.071981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.072154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.072175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.072276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.072299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 
00:28:28.633 [2024-10-14 16:53:33.072397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.072419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.072518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.072539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.072653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.072676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.072855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.072878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.072977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.072998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.073166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.073188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.073272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.073292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.073393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.073415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.073588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.073616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.073798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.073820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 
00:28:28.633 [2024-10-14 16:53:33.074086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.074106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.074209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.074231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.074315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.074335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.074507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.074528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.074689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.074712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.074822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.074843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.074995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.075016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.075101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.075121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.075205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.075226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.075325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.075345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 
00:28:28.633 [2024-10-14 16:53:33.075437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.075457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.075561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.075582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.075744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.075770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.075921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.075942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.076100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.076122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.076226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.076247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.076362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.076384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.076562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.076583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.076767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.076789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 00:28:28.633 [2024-10-14 16:53:33.076879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.633 [2024-10-14 16:53:33.076900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.633 qpair failed and we were unable to recover it. 
00:28:28.633 [2024-10-14 16:53:33.077062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.077083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.077253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.077273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.077427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.077449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.077533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.077554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.077778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.077801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.077879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.077900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.077986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.078007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.078182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.078203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.078373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.078394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.078569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.078590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 
00:28:28.634 [2024-10-14 16:53:33.078772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.078794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.079013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.079034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.079256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.079278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.079473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.079494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.079731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.079753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.079903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.079924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.080026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.080048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.080219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.080239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.080337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.080358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.080459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.080494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 
00:28:28.634 [2024-10-14 16:53:33.080713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.080747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.080949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.080981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.081092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.081122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.081232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.081264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.081481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.081512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.081707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.081731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.081813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.081833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.081986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.082007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.082100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.082121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.082312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.082333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 
00:28:28.634 [2024-10-14 16:53:33.082499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.082521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.082634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.082656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.082750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.082774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.083049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.083070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.083152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.083172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.083292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.083314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.083467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.083488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.634 [2024-10-14 16:53:33.083663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.634 [2024-10-14 16:53:33.083685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.634 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.083863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.083892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.084062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.084084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 
00:28:28.635 [2024-10-14 16:53:33.084254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.084275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.084380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.084401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.084504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.084526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.084676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.084699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.084863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.084884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.085077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.085099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.085324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.085345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.085521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.085542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.085709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.085732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.085824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.085844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 
00:28:28.635 [2024-10-14 16:53:33.085937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.085958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.086127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.086147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.086227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.086247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.086436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.086457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.086552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.086574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.086729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.086751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.086906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.086927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.087045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.087067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.087162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.087183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.087275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.087296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 
00:28:28.635 [2024-10-14 16:53:33.087394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.087416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.087517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.087538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.087658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.087680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.087780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.087801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.088002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.088023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.088118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.088139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.088321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.088341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.088574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.088595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.088767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.088788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.088945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.088967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 
00:28:28.635 [2024-10-14 16:53:33.089064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.089085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.089260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.089281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.635 [2024-10-14 16:53:33.089529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.635 [2024-10-14 16:53:33.089553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.635 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.089669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.089692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.089802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.089823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.089977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.089999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.090173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.090193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.090286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.090307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.090512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.090532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.090648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.090670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 
00:28:28.636 [2024-10-14 16:53:33.090784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.090804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.090889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.090909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.091153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.091174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.091276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.091297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.091407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.091428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.091579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.091606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.091709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.091730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.091884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.091904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.092070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.092092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.092200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.092223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 
00:28:28.636 [2024-10-14 16:53:33.092402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.092423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.092528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.092550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.092666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.092688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.092770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.092790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.092986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.093007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.093170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.093192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.093346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.093368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.093450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.093474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.093626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.093649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.093756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.093778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 
00:28:28.636 [2024-10-14 16:53:33.093887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.093908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.094017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.094038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.094139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.094160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.094255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.094277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.094428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.094450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.636 qpair failed and we were unable to recover it. 00:28:28.636 [2024-10-14 16:53:33.094622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.636 [2024-10-14 16:53:33.094644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.094751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.094773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.094985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.095007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.095170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.095191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.095369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.095391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 
00:28:28.637 [2024-10-14 16:53:33.095636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.095658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.095755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.095776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.095929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.095951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.096112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.096134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.096283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.096305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.096531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.096552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.096655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.096678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.096907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.096928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.097027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.097047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.097208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.097229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 
00:28:28.637 [2024-10-14 16:53:33.097380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.097401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.097501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.097523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.097752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.097774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.097890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.097912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.098135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.098157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.098323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.098344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.098501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.098522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.098686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.098708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.098874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.098895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.099051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.099072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 
00:28:28.637 [2024-10-14 16:53:33.099239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.099260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.099446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.099466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.099630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.099652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.099750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.099771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.099875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.099896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.100063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.100084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.100164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.100187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.100347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.100368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.100520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.100542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.100709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.100736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 
00:28:28.637 [2024-10-14 16:53:33.100840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.100862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.101039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.101061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.101234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.101255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.101358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.101379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.637 [2024-10-14 16:53:33.101590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.637 [2024-10-14 16:53:33.101619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.637 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.101797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.101819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.101981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.102004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.102196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.102219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.102315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.102336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.102534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.102555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 
00:28:28.638 [2024-10-14 16:53:33.102707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.102730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.102896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.102918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.103012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.103033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.103157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.103179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.103280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.103301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.103398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.103418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.103534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.103555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.103719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.103741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.103853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.103873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.104117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.104139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 
00:28:28.638 [2024-10-14 16:53:33.104291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.104313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.104464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.104485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.104588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.104614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.104769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.104790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.104949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.104970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.105071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.105092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.105324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.105346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.105513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.105534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.105630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.105652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.105872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.105905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 
00:28:28.638 [2024-10-14 16:53:33.106128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.106151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.106248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.106270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.106432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.106453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.106636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.106659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.106854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.106875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.107136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.107159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.107251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.107272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.107443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.107465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.107566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.107586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.107776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.107803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 
00:28:28.638 [2024-10-14 16:53:33.107912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.638 [2024-10-14 16:53:33.107937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.638 qpair failed and we were unable to recover it. 00:28:28.638 [2024-10-14 16:53:33.108028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.108049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.108152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.108174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.108325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.108346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.108449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.108483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.108576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.108597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.108804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.108827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.108923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.108944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.109101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.109122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.109284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.109305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 
00:28:28.639 [2024-10-14 16:53:33.109525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.109547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.109725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.109747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.109854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.109875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.110030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.110052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.110149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.110172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.110286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.110307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.110469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.110492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.110572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.110595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.110714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.110735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.110900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.110921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 
00:28:28.639 [2024-10-14 16:53:33.111005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.111027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.111199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.111219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.111301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.111320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.111471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.111492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.111640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.111663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.111752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.111773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.111875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.111896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.112003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.112025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.112133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.112154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.112240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.112260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 
00:28:28.639 [2024-10-14 16:53:33.112497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.112519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.112686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.112708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.112807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.112828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.112982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.113004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.113159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.113180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.113274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.113295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.113485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.113506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.113594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.113620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.113767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.113790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 00:28:28.639 [2024-10-14 16:53:33.113886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.639 [2024-10-14 16:53:33.113912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.639 qpair failed and we were unable to recover it. 
00:28:28.639 [2024-10-14 16:53:33.114005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.114026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.114041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.640 [2024-10-14 16:53:33.114138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.114162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.114313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.114335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.114488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.114509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.114618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.114646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.114737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.114758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.114843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.114864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.114967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.114988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.115156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.115178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 
00:28:28.640 [2024-10-14 16:53:33.115267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.115288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.115372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.115394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.115557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.115577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.115774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.115801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.115915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.115937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.116045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.116067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.116237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.116259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.116378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.116399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.116555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.116576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.116695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.116718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 
00:28:28.640 [2024-10-14 16:53:33.116822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.116846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.116931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.116953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.117126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.117147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.117235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.117256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.117408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.117429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.117515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.117536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.117636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.117659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.117769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.117791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.117946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.117968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.118133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.118155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 
00:28:28.640 [2024-10-14 16:53:33.118380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.118401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.118633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.118656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.118756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.118777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.118933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.118954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.119102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.119123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.119234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.119256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.119417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.119439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.119618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.119641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.119751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.640 [2024-10-14 16:53:33.119772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.640 qpair failed and we were unable to recover it. 00:28:28.640 [2024-10-14 16:53:33.119866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.119887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 
00:28:28.641 [2024-10-14 16:53:33.119977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.119998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.120104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.120127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.120223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.120244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.120404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.120426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.120517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.120539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.120625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.120647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.120754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.120776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.120869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.120891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.120975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.120995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.121087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.121108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 
00:28:28.641 [2024-10-14 16:53:33.121262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.121283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.121391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.121413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.121507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.121528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.121773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.121800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.121952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.121974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.122091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.122113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.122282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.122303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.122392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.122413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.122631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.122654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.122823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.122845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 
00:28:28.641 [2024-10-14 16:53:33.123043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.123064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.123235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.123256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.123407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.123427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.123529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.123550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.123649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.123672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.123769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.123791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.123889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.123910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.124001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.124023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.124186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.124207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 00:28:28.641 [2024-10-14 16:53:33.124361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.641 [2024-10-14 16:53:33.124383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.641 qpair failed and we were unable to recover it. 
00:28:28.642 [2024-10-14 16:53:33.124491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.124513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.124684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.124707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.124813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.124835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.124997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.125020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.125126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.125148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.125395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.125417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.125570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.125591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.125702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.125725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.125946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.125969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.126144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.126166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 
00:28:28.642 [2024-10-14 16:53:33.126268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.126289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.126372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.126394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.126559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.126581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.126811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.126883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.127065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.127135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.127454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.127492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.127595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.127625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.127795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.127817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.127976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.127998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.128094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.128115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 
00:28:28.642 [2024-10-14 16:53:33.128232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.128253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.128405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.128426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.128523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.128544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.128698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.128728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.128836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.128858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.128940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.128960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.129044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.129065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.129163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.129183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.129379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.129400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.129560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.129581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 
00:28:28.642 [2024-10-14 16:53:33.129715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.129754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.129963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.129996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.130177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.130209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.130439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.130462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.130624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.130646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.130770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.130791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.130951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.130973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.642 [2024-10-14 16:53:33.131166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.642 [2024-10-14 16:53:33.131188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.642 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.131281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.131303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.131543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.131564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 
00:28:28.643 [2024-10-14 16:53:33.131732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.131754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.131918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.131940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.132037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.132058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.132159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.132182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.132343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.132364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.132531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.132553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.132642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.132664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.132908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.132928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.133077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.133098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.133270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.133291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 
00:28:28.643 [2024-10-14 16:53:33.133541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.133585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.133732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.133766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.133959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.133993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.134166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.134198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.134391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.134422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.134619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.134652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.134761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.134785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.134932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.134954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.135051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.135082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.135184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.135206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 
00:28:28.643 [2024-10-14 16:53:33.135306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.135327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.135542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.135564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.135668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.135690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.135789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.135815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.135927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.135947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.136110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.136132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.136296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.136317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.136398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.136419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.136517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.136539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.136627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.136648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 
00:28:28.643 [2024-10-14 16:53:33.136762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.136784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.136945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.136966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.137051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.137072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.137162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.137184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.137277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.137298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.137446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.643 [2024-10-14 16:53:33.137466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.643 qpair failed and we were unable to recover it. 00:28:28.643 [2024-10-14 16:53:33.137631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.137654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.137759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.137780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.137956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.137977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.138125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.138145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 
00:28:28.644 [2024-10-14 16:53:33.138316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.138338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.138505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.138527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.138630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.138653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.138877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.138898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.139121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.139143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.139300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.139321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.139416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.139438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.139652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.139674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.139917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.139938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.140112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.140134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 
00:28:28.644 [2024-10-14 16:53:33.140303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.140340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.140525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.140557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.140815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.140848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.140968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.141000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.141132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.141162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.141355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.141388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.141660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.141684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.141856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.141878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.142092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.142113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.142279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.142300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 
00:28:28.644 [2024-10-14 16:53:33.142531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.142552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.142795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.142818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.142913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.142934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.143110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.143132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.143232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.143253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.143428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.143449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.143633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.143656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.143747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.143768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.144000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.144023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.144264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.144286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 
00:28:28.644 [2024-10-14 16:53:33.144470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.144491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.144615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.144638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.144749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.144771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.144988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.644 [2024-10-14 16:53:33.145008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.644 qpair failed and we were unable to recover it. 00:28:28.644 [2024-10-14 16:53:33.145178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.145200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.145288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.145309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.145475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.145497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.145649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.145672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.145772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.145794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.145894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.145915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 
00:28:28.645 [2024-10-14 16:53:33.146146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.146168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.146335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.146357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.146451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.146472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.146576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.146597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.146684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.146708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.146801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.146822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.146982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.147003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.147098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.147119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.147275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.147297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.147381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.147401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 
00:28:28.645 [2024-10-14 16:53:33.147546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.147571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.147751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.147773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.148027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.148048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.148222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.148243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.148408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.148430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.148711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.148734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.148842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.148863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.149044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.149065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.149163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.149184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.149354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.149375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 
00:28:28.645 [2024-10-14 16:53:33.149466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.149487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.149585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.149613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.149832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.149853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.149931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.149951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.150220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.150242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.150437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.150461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.150566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.150588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.150755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.150777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.150889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.150911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 00:28:28.645 [2024-10-14 16:53:33.151066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.645 [2024-10-14 16:53:33.151097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.645 qpair failed and we were unable to recover it. 
00:28:28.645 [2024-10-14 16:53:33.151262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.151284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.151452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.151473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.151631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.151652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.151830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.151851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.151951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.151972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.152237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.152258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.152506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.152527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.152709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.152735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.152829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.152850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.153017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.153038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 
00:28:28.646 [2024-10-14 16:53:33.153202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.153224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.153308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.153328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.153491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.153512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.153667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.153689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.153840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.153874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.154099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.154120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.154233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.154255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.154430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.154452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.154550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.154572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.154737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.154759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 
00:28:28.646 [2024-10-14 16:53:33.154941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.154968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.155161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.155188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.155208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.646 [2024-10-14 16:53:33.155232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.646 [2024-10-14 16:53:33.155238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.646 [2024-10-14 16:53:33.155245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.646 [2024-10-14 16:53:33.155251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.646 [2024-10-14 16:53:33.155340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.155360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.155450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.155471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.155640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.155661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.155900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.155922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.156014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.156036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.156199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.156220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 
00:28:28.646 [2024-10-14 16:53:33.156308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.156329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.156500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.156521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.156743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.156765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.156798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:28.646 [2024-10-14 16:53:33.156927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.156953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.156905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:28.646 [2024-10-14 16:53:33.156989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:28.646 [2024-10-14 16:53:33.156990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:28.646 [2024-10-14 16:53:33.157100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.157121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.646 [2024-10-14 16:53:33.157337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.646 [2024-10-14 16:53:33.157359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.646 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.157576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.157597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.157701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.157723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.157875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.157896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 
00:28:28.647 [2024-10-14 16:53:33.158066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.158087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.158194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.158216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.158380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.158401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.158568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.158590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.158689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.158712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.158888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.158910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.158995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.159016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.159185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.159208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.159385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.159407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.159628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.159651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 
00:28:28.647 [2024-10-14 16:53:33.159765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.159786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.159902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.159924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.160038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.160059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.160144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.160165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.160333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.160353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.160517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.160539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.160630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.160651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.160802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.160823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.160922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.160956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.161133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.161155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 
00:28:28.647 [2024-10-14 16:53:33.161253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.161275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.161384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.161406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.161556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.161577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.161787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.161810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.161915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.161937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.162168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.162190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.162282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.162304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.162523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.162546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.162644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.162666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.162829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.162852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 
00:28:28.647 [2024-10-14 16:53:33.163120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.163142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.163237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.163259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.163412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.163433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.163609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.647 [2024-10-14 16:53:33.163640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.647 qpair failed and we were unable to recover it. 00:28:28.647 [2024-10-14 16:53:33.163812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.163833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.164023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.164044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.164134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.164157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.164246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.164267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.164497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.164519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.164622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.164644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 
00:28:28.648 [2024-10-14 16:53:33.164819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.164841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.165103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.165124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.165278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.165300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.165455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.165478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.165631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.165654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.165822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.165844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.166026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.166047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.166147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.166180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.166444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.166466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.166620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.166642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 
00:28:28.648 [2024-10-14 16:53:33.166816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.166837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.166941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.166962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.167068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.167097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.167260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.167284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.167391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.167412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.167523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.167545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.167695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.167718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.167865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.167887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.167977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.167999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.168090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.168112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 
00:28:28.648 [2024-10-14 16:53:33.168255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.168303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.168428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.168465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.168734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.168769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.169010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.169041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.169181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.169214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.169396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.169428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.169670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.169704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.169827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.169859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.170056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.170087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.170194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.170227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 
00:28:28.648 [2024-10-14 16:53:33.170399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.170428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.170619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.648 [2024-10-14 16:53:33.170651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.648 qpair failed and we were unable to recover it. 00:28:28.648 [2024-10-14 16:53:33.170900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.170929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.171148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.171176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.171434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.171456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.171556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.171577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.171748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.171771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.171945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.171968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.172077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.172099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.172203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.172224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 
00:28:28.649 [2024-10-14 16:53:33.172331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.172353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.172552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.172574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.172743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.172768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.173016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.173039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.173140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.173162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.173357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.173379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.173547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.173569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.173825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.173852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.174021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.174045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.174222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.174246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 
00:28:28.649 [2024-10-14 16:53:33.174434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.174458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.174644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.174669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.174925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.174949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.175066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.175088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.175251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.175276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.175446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.175469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.175570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.175592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.175766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.175791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.175944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.175979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.176186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.176210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 
00:28:28.649 [2024-10-14 16:53:33.176474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.176531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.176687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.176721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.177001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.177037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.177253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.177287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.177427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.177460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.177582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.177625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.177800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.177825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.178045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.178068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.178170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.178191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 00:28:28.649 [2024-10-14 16:53:33.178295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.178318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.649 qpair failed and we were unable to recover it. 
00:28:28.649 [2024-10-14 16:53:33.178560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.649 [2024-10-14 16:53:33.178583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.178777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.178800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.178896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.178917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.179178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.179199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.179368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.179390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.179537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.179560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.179800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.179822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.179977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.179999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.180223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.180245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.180419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.180440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 
00:28:28.650 [2024-10-14 16:53:33.180664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.180688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.180855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.180878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.181047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.181070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.181252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.181276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.181428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.181451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.181613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.181635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.181754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.181777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.181955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.181979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.182079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.182103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.182206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.182228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 
00:28:28.650 [2024-10-14 16:53:33.182329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.182353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.182440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.182461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.182556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.182580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.182769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.182793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.182899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.182920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.183009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.183030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.183191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.183214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.183317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.183340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.183547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.183570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 00:28:28.650 [2024-10-14 16:53:33.183753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.650 [2024-10-14 16:53:33.183775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.650 qpair failed and we were unable to recover it. 
00:28:28.650 [2024-10-14 16:53:33.184007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.184034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.184198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.184221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.184385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.184407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.184496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.184516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.184745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.184768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.185015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.185036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.185147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.185168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.185332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.185353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.185522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.185543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.185769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.185791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 
00:28:28.651 [2024-10-14 16:53:33.185958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.185979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.186144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.186165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.186386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.186407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.186673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.186696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.186910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.186931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.187165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.187186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.187449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.187470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.187655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.187706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.187931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.187953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 00:28:28.651 [2024-10-14 16:53:33.188051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.651 [2024-10-14 16:53:33.188072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.651 qpair failed and we were unable to recover it. 
00:28:28.651 [2024-10-14 16:53:33.188221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.651 [2024-10-14 16:53:33.188242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:28.651 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every connection attempt timestamped 16:53:33.188 through 16:53:33.231, elapsed time 00:28:28.651 through 00:28:28.927 ...]
00:28:28.927 [2024-10-14 16:53:33.231754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:28.927 [2024-10-14 16:53:33.231776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420
00:28:28.927 qpair failed and we were unable to recover it.
00:28:28.927 [2024-10-14 16:53:33.231939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.231959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.232131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.232152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.232390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.232411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.232589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.232615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.232774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.232795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.232881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.232901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.233121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.233142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.233288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.233340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6dc60 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.233667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.233718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.233915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.233946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7124000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 
00:28:28.927 [2024-10-14 16:53:33.234194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.234218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.234464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.234485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.234655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.234677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.234923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.234944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.235260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.235282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.235395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.235416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.235579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.235606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.235795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.235816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.236075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.236096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.236264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.236285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 
00:28:28.927 [2024-10-14 16:53:33.236529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.236553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.236723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.236744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.236965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.236986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.237178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.237199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.237364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.237385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.237640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.237662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.237849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.237870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.238096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.238117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.238352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.238372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 00:28:28.927 [2024-10-14 16:53:33.238533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.238554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.927 qpair failed and we were unable to recover it. 
00:28:28.927 [2024-10-14 16:53:33.238787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.927 [2024-10-14 16:53:33.238809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.239074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.239095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.239195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.239215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.239470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.239491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.239735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.239757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.239856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.239877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.240026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.240046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.240311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.240332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.240436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.240457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.240676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.240698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 
00:28:28.928 [2024-10-14 16:53:33.240941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.240962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.241112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.241133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.241349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.241370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.241544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.241564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.241820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.241843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.242030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.242051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.242288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.242308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.242472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.242494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.242737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.242758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.242976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.242997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 
00:28:28.928 [2024-10-14 16:53:33.243242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.243263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.243432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.243453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.243672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.243694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.243846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.243866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.244051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.244072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.244297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.244318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.244583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.244608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.244832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.244853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.245109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.245130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.245373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.245394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 
00:28:28.928 [2024-10-14 16:53:33.245554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.245579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.245808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.245830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.245945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.245965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.246148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.246168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.246314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.246334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.246485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.246505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.246779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.246801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.247037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.247057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.928 [2024-10-14 16:53:33.247293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.928 [2024-10-14 16:53:33.247314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.928 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.247555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.247575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 
00:28:28.929 [2024-10-14 16:53:33.247731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.247753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.248018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.248038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.248258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.248279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.248439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.248460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.248707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.248729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.248964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.248984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.249198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.249218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.249378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.249398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.249640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.249661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.249825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.249846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 
00:28:28.929 [2024-10-14 16:53:33.250079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.250100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.250314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.250335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.250551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.250571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.250770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.250791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.250974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.250995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.251232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.251253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.251467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.251487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.251797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.251848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.252108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.252148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.252421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.252461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f712c000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 
00:28:28.929 [2024-10-14 16:53:33.252746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.252769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.253035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.253056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.253297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.253317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.253465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.253485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.253712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.253734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.253980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.254001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.254240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.254260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.254371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.254392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.254560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.254580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.254702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.254723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 
00:28:28.929 [2024-10-14 16:53:33.254899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.254923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.255162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.255183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.255295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.255316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.255555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.255575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.255821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.255844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.929 [2024-10-14 16:53:33.256075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.929 [2024-10-14 16:53:33.256096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.929 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.256336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.256357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.256596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.256624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.256853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.256875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.256970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.256992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 
00:28:28.930 [2024-10-14 16:53:33.257159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.257179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.257380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.257400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.257572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.257592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.257814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.257834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.258001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.258022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.258281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.258301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.258457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.258478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.258752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.258774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.258946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.258967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.259131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.259152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 
00:28:28.930 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:28.930 [2024-10-14 16:53:33.259375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.259398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.259567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.259589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:28.930 [2024-10-14 16:53:33.259806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.259827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.259931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.259952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:28.930 [2024-10-14 16:53:33.260198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.260221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:28.930 [2024-10-14 16:53:33.260394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.260420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.930 [2024-10-14 16:53:33.260646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.260671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.260924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.260945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 
00:28:28.930 [2024-10-14 16:53:33.261137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.261158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.261325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.261346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.261516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.261536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.261772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.261795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.261979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.930 [2024-10-14 16:53:33.262000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.930 qpair failed and we were unable to recover it. 00:28:28.930 [2024-10-14 16:53:33.262172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.262192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.262395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.262417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.262620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.262642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.262771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.262792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.263056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.263077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 
00:28:28.931 [2024-10-14 16:53:33.263256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.263282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.263444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.263466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.263657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.263679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.263893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.263915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.264150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.264172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.264388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.264409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.264658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.264680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.264896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.264917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.265160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.265181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.265368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.265389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 
00:28:28.931 [2024-10-14 16:53:33.265585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.265614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.265712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.265733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.265884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.265904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.266081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.266102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.266268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.266290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.266435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.266456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.266546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.266565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.266757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.266779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.266937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.266959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.267106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.267129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 
00:28:28.931 [2024-10-14 16:53:33.267242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.267264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.267371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.267391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.267613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.267634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.267795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.267816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.267986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.268007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.268112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.268133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.268224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.268244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.268358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.268379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.268617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.268639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.268821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.268842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 
00:28:28.931 [2024-10-14 16:53:33.269017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.931 [2024-10-14 16:53:33.269037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.931 qpair failed and we were unable to recover it. 00:28:28.931 [2024-10-14 16:53:33.269119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.269139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.269387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.269408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.269613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.269634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.269818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.269839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.269939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.269960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.270049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.270068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.270161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.270184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.270287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.270308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.270409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.270431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 
00:28:28.932 [2024-10-14 16:53:33.270594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.270627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.270813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.270834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.270909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.270928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.271044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.271065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.271219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.271240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.271339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.271360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.271507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.271530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.271698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.271721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.271845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.271866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.271973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.271995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 
00:28:28.932 [2024-10-14 16:53:33.272153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.272174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.272274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.272297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.272390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.272411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.272624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.272646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.272805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.272827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.272970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.272990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.273080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.273099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.273264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.273284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.273503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.273524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.273626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.273647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 
00:28:28.932 [2024-10-14 16:53:33.273735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.273755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.273853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.273874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.274046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.274066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.274226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.274248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.274332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.274351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.274510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.274530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.274679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.274701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.274808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.932 [2024-10-14 16:53:33.274829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.932 qpair failed and we were unable to recover it. 00:28:28.932 [2024-10-14 16:53:33.274971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.274991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.275078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.275097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 
00:28:28.933 [2024-10-14 16:53:33.275253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.275274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.275375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.275396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.275555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.275576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.275750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.275774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.275870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.275890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.275991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.276011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.276111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.276132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.276224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.276246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.276409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.276430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.276524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.276546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 
00:28:28.933 [2024-10-14 16:53:33.276646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.276671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.276828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.276849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.276930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.276950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.277156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.277177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.277356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.277377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.277552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.277572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.277738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.277759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.277844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.277865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.278071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.278091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.278251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.278272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 
00:28:28.933 [2024-10-14 16:53:33.278461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.278482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.278662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.278684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.278902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.278923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.279078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.279099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.279301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.279323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.279556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.279578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.279748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.279770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.279923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.279945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.280118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.280139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.280311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.280331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 
00:28:28.933 [2024-10-14 16:53:33.280449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.280468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.280631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.280653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.280788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.280810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.281015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.281037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.933 [2024-10-14 16:53:33.281233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.933 [2024-10-14 16:53:33.281255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.933 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.281498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.281519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.281766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.281787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.281969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.281992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.282098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.282118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.282379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.282400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 
00:28:28.934 [2024-10-14 16:53:33.282508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.282528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.282691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.282713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.282865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.282885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.283051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.283071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.283231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.283253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.283467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.283488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.283590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.283619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.283780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.283801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.283914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.283934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.284032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.284053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 
00:28:28.934 [2024-10-14 16:53:33.284160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.284199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.284368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.284388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.284516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.284537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.284730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.284753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.284880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.284901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.285018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.285038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.285203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.285224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.285415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.285435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.285609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.285630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.285786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.285807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 
00:28:28.934 [2024-10-14 16:53:33.285978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.285999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.286105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.286127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.286372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.286392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.286581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.286608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.286813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.286835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.287097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.287118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.287393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.287415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.287659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.287682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.287786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.287806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.934 qpair failed and we were unable to recover it. 00:28:28.934 [2024-10-14 16:53:33.287970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.934 [2024-10-14 16:53:33.287991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 
00:28:28.935 [2024-10-14 16:53:33.288088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.288109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.288291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.288312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.288398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.288419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.288568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.288589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.288719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.288741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.288848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.288869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.289034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.289056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.289232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.289253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.289356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.289376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.289489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.289510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 
00:28:28.935 [2024-10-14 16:53:33.289610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.289631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.289809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.289829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.290006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.290026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.290134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.290155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.290273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.290294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.290383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.290403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.290587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.290613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.290716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.290737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.290845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.290866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.291014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.291034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 
00:28:28.935 [2024-10-14 16:53:33.291183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.291207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.291295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.291316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.291415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.291436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.291611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.291649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.291776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.291800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.291891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.291912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.292014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.292035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.292121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.292142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.292233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.292254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.292486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.292507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 
00:28:28.935 [2024-10-14 16:53:33.292736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.292758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.292958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.292979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.293084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.935 [2024-10-14 16:53:33.293105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.935 qpair failed and we were unable to recover it. 00:28:28.935 [2024-10-14 16:53:33.293191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.293211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.293312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.293333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.293422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.293442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.293540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.293560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.293733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.293753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.293853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.293874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.293958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.293980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 
00:28:28.936 [2024-10-14 16:53:33.294148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.294169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.936 [2024-10-14 16:53:33.294290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.294310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.294468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.294491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.936 [2024-10-14 16:53:33.294668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.294690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.294840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.294862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.936 [2024-10-14 16:53:33.295015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.295037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.295136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.295159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.936 [2024-10-14 16:53:33.295325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.295347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it.
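Editor's note: errno = 111 in the lines above is ECONNREFUSED on Linux. The test being traced here is nvmf_target_disconnect, so the NVMe-oF/TCP listener at 10.0.0.2:4420 is expected to be unreachable for a while; every reconnect attempt made by nvme_tcp_qpair_connect_sock() then fails at the plain connect() level, which is what posix_sock_create() keeps logging. The following is a minimal, self-contained C sketch, not SPDK code, that reproduces the same errno by connecting to a port with no listener; the loopback address and port are placeholders, not the test's configuration.

/*
 * Minimal sketch (not SPDK code): call connect() against a TCP port with no
 * listener. On Linux the kernel answers with RST and connect() fails with
 * ECONNREFUSED (111), matching the "connect() failed, errno = 111" lines.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assume nothing listens here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expected output: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}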
00:28:28.936 [2024-10-14 16:53:33.295456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.295476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.295645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.295666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.295815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.295836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.295934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.295955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.296049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.296071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.296276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.296297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.296471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.296492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.296686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.296707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.296803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.296823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.296932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.296953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 
00:28:28.936 [2024-10-14 16:53:33.297069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.297089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.297214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.297234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.297397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.297418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.297680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.297702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.297856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.297877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.298123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.298143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.298434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.298454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.298734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.298756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.936 [2024-10-14 16:53:33.298916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.936 [2024-10-14 16:53:33.298937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.936 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.299104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.299124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 
00:28:28.937 [2024-10-14 16:53:33.299390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.299411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.299563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.299583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.299767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.299788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.299898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.299919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.300084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.300105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.300394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.300414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.300578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.300598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.300801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.300821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.300990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.301011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.301182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.301202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 
00:28:28.937 [2024-10-14 16:53:33.301308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.301329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.301431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.301452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.301612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.301633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.301795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.301815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.301917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.301937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.302062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.302082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.302371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.302391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.302581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.302616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.302767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.302788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.302909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.302929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 
00:28:28.937 [2024-10-14 16:53:33.303077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.303097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.303278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.303299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.303446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.303466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.303654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.303675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.303801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.303822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.303986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.304007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.304165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.304186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.304447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.304467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.304649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.304670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.304913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.304933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 
00:28:28.937 [2024-10-14 16:53:33.305106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.305127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.305352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.305374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.305535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.305555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.305769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.305790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.305974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.305996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.306115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.306135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.306306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.306327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.937 qpair failed and we were unable to recover it. 00:28:28.937 [2024-10-14 16:53:33.306487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.937 [2024-10-14 16:53:33.306507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.306752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.306774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.306871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.306891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 
00:28:28.938 [2024-10-14 16:53:33.307061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.307081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.307207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.307228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.307411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.307431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.307688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.307710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.307884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.307905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.308071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.308092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.308279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.308300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.308482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.308503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.308608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.308629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.308707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.308726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 
00:28:28.938 [2024-10-14 16:53:33.308844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.308865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.309051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.309071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.309294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.309315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.309500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.309520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.309688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.309710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.309899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.309920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.310020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.310041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.310232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.310256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.310368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.310389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.310543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.310564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 
00:28:28.938 [2024-10-14 16:53:33.310796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.310817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.310984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.311004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.311256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.311277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.311446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.311467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.311630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.311651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.311820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.311840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.311944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.311964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.312125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.312146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.312329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.312350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.312509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.312529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 
00:28:28.938 [2024-10-14 16:53:33.312682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.312703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.312828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.312849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.313023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.313043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.313321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.313342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.313451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.313471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.313727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.313750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.938 qpair failed and we were unable to recover it. 00:28:28.938 [2024-10-14 16:53:33.313968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.938 [2024-10-14 16:53:33.313988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.314253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.314275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.314435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.314455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.314727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.314747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 
00:28:28.939 [2024-10-14 16:53:33.314975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.314996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.315096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.315117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.315407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.315429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.315609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.315631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.315753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.315775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.315925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.315945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.316106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.316126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.316234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.316255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.316464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.316485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.316588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.316617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 
00:28:28.939 [2024-10-14 16:53:33.316876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.316897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.317063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.317084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.317313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.317334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.317548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.317568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.317755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.317777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.317882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.317903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.317999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.318018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.318184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.318208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.318427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.318448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.318665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.318687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 
00:28:28.939 [2024-10-14 16:53:33.318773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.318793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.319010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.319031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.319277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.319298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.319463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.319484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.939 qpair failed and we were unable to recover it. 00:28:28.939 [2024-10-14 16:53:33.319743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.939 [2024-10-14 16:53:33.319765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.319927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.319948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.320094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.320114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.320312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.320333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.320499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.320520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.320773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.320795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 
00:28:28.940 [2024-10-14 16:53:33.320967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.320987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.321183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.321204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.321441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.321463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.321614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.321636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.321790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.321811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.321971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.321992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.322152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.322173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.322413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.322434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.322595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.322624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.322791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.322812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 
00:28:28.940 [2024-10-14 16:53:33.322973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.322994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.323213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.323235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.323345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.323367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.323595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.323624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.323784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.323806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.323997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.324018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.324186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.324208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.324389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.324411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.324589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.324630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.324799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.324820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 
00:28:28.940 [2024-10-14 16:53:33.324990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.325012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.325181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.325202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.325430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.325452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.325619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.325643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.325858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.325880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.325976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.325998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.326177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.326198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.326347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.326373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.326537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.326560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.326742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.326766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 
00:28:28.940 [2024-10-14 16:53:33.327010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.327031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.327200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.327221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.327398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.327421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.940 [2024-10-14 16:53:33.327576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.940 [2024-10-14 16:53:33.327596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.940 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.327814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.327836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.328025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.328046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.328141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.328162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.328352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.328373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.328541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.328562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.328815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.328837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 
00:28:28.941 [2024-10-14 16:53:33.328954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.328975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.329197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.329219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.329369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.329390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.329670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.329692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.329929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.329950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.330185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.330205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 Malloc0 00:28:28.941 [2024-10-14 16:53:33.330394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.330415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.330659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.330681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.330909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.330930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 
00:28:28.941 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.941 [2024-10-14 16:53:33.331087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.331108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.331266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.331287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:28.941 [2024-10-14 16:53:33.331550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.331572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.331847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.941 [2024-10-14 16:53:33.331870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.332041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.332062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.941 [2024-10-14 16:53:33.332317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.332339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.332553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.332574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.332802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.332825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 
00:28:28.941 [2024-10-14 16:53:33.333012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.333033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.333254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.333275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.333438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.333460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.333642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.333664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.333903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.333924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.334109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.334130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.334369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.334390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.334535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.334556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.334713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.334734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 00:28:28.941 [2024-10-14 16:53:33.334905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.941 [2024-10-14 16:53:33.334926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.941 qpair failed and we were unable to recover it. 
00:28:28.941 [2024-10-14 16:53:33.335094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.335114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.335359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.335380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.335621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.335643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.335871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.335891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.336158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.336178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.336401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.336422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.336574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.336594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.336769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.336790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.337061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.337081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.337297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.337318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 
00:28:28.942 [2024-10-14 16:53:33.337561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.337581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.337735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.337756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.337987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.942 [2024-10-14 16:53:33.338028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.338050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.338141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.338162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.338355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.338376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.338536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.338557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.338716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.338737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.338836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.338857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.339093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.339113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 
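Annotation: the xtrace lines interleaved above show host/target_disconnect.sh@21 bringing up the target transport with rpc_cmd nvmf_create_transport -t tcp -o, and the "*** TCP Transport Init ***" notice on this line confirms the transport was created. As far as this log shows, rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so a roughly equivalent standalone invocation would look like the sketch below; the rpc.py path and the -o option are reproduced from the trace and should be treated as assumptions outside this log.
# Create the TCP transport on the running nvmf_tgt (default RPC socket), as traced.
scripts/rpc.py nvmf_create_transport -t tcp -o
# Optionally confirm the transport exists afterwards.
scripts/rpc.py nvmf_get_transports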
00:28:28.942 [2024-10-14 16:53:33.339365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.339385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.339625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.339646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.339817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.339838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.340019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.340039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.340142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.340163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.340378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.340399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.340620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.340641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.340910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.340930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.341167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.341188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.341447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.341468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 
00:28:28.942 [2024-10-14 16:53:33.341616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.341638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.341923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.341944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.342159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.342179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.342333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.342354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.342568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.342589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.342766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.342787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.942 qpair failed and we were unable to recover it. 00:28:28.942 [2024-10-14 16:53:33.343025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.942 [2024-10-14 16:53:33.343045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.343234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.343254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.343498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.343519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.343688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.343714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 
00:28:28.943 [2024-10-14 16:53:33.343907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.343928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.344108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.344128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.344287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.344307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.344473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.344493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.344668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.344690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.344842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.344863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.345031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.345051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.345285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.345305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.345576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.345596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.345696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.345717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 
00:28:28.943 [2024-10-14 16:53:33.345880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.345901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.346164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.346184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.346422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.346443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.346608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.346629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.943 [2024-10-14 16:53:33.346891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.346913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.347137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.347158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.943 [2024-10-14 16:53:33.347397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.347418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.943 [2024-10-14 16:53:33.347631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.347653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 
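Annotation: interleaved with the connect retries, host/target_disconnect.sh@22 creates the subsystem with rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, where -a allows any host to connect and -s sets the serial number. A standalone sketch of the same step, assuming scripts/rpc.py against the default RPC socket:
# Create the subsystem named in the trace; -a = allow any host, -s = serial number.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001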
00:28:28.943 [2024-10-14 16:53:33.347894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.347915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.348153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.348174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.348437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.348458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.348557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.348578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.348814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.348836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.348998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.349020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.349306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.349327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.349491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.349512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.349769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.349792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.349961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.349982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 
00:28:28.943 [2024-10-14 16:53:33.350166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.350187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.350403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.350423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.350610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.350631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.350805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.350825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.943 qpair failed and we were unable to recover it. 00:28:28.943 [2024-10-14 16:53:33.351050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.943 [2024-10-14 16:53:33.351071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.351231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.351252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.351440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.351460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.351722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.351744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.351966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.351987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.352176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.352201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 
00:28:28.944 [2024-10-14 16:53:33.352417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.352438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.352606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.352626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.352794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.352815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.352917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.352936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.353090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.353111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.353210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.353231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.353379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.353399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.353499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.353519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.353628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.353649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.353801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.353821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 
00:28:28.944 [2024-10-14 16:53:33.354014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.354036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.354260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.354281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.354453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.354473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.354717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.354741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.944 [2024-10-14 16:53:33.354993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.355014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.355234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.355256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:28.944 [2024-10-14 16:53:33.355469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.355491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.944 [2024-10-14 16:53:33.355730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.355752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 
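Annotation: the next traced step, host/target_disconnect.sh@24, attaches the Malloc0 bdev to the subsystem as a namespace. Malloc0 has to exist as a bdev before this call; the bdev_malloc_create line below is an assumption about how such a bdev is commonly created and does not appear in this trace.
# Assumed setup step (not shown in this log): a 64 MiB malloc bdev with 512-byte blocks named Malloc0.
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# Traced step: expose Malloc0 as a namespace of the subsystem.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0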
00:28:28.944 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.944 [2024-10-14 16:53:33.355969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.355991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.356143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.356164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.356330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.356350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.356613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.356635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.356796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.356818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.357043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.944 [2024-10-14 16:53:33.357065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.944 qpair failed and we were unable to recover it. 00:28:28.944 [2024-10-14 16:53:33.357300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.357325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.357490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.357511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.357773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.357794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 
00:28:28.945 [2024-10-14 16:53:33.357983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.358004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.358218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.358239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.358394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.358414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.358633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.358654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.358828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.358849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.359014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.359035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.359259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.359279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.359516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.359537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.359709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.359732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.360015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.360036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 
00:28:28.945 [2024-10-14 16:53:33.360226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.360248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.360498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.360520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.360781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.360803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.361008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.361029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.361193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.361215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.361456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.361477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.361696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.361718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.361910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.361930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.362159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.362179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 
00:28:28.945 [2024-10-14 16:53:33.362397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.945 [2024-10-14 16:53:33.362419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.362683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.945 [2024-10-14 16:53:33.362704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.362945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.362966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.945 [2024-10-14 16:53:33.363181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.363206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.945 [2024-10-14 16:53:33.363452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.363473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.363656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.363678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.363945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.363967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.364188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.364209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 
00:28:28.945 [2024-10-14 16:53:33.364458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.364480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.364648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.364669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.364896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.364917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.365155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.365176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.945 [2024-10-14 16:53:33.365341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.945 [2024-10-14 16:53:33.365363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.945 qpair failed and we were unable to recover it. 00:28:28.946 [2024-10-14 16:53:33.365537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.946 [2024-10-14 16:53:33.365557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.946 qpair failed and we were unable to recover it. 00:28:28.946 [2024-10-14 16:53:33.365779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.946 [2024-10-14 16:53:33.365801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.946 qpair failed and we were unable to recover it. 00:28:28.946 [2024-10-14 16:53:33.366017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.946 [2024-10-14 16:53:33.366038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7120000b90 with addr=10.0.0.2, port=4420 00:28:28.946 qpair failed and we were unable to recover it. 
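Note: the errno value in the connect() failures above is 111, which on Linux is ECONNREFUSED: the host keeps probing 10.0.0.2 port 4420 before any listener has been started, so every TCP connect attempt is refused and the qpair cannot be established. A quick way to decode the raw value (illustrative only, not part of the test flow):

  # Decode errno 111 with the Python standard library (Linux).
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # Prints: ECONNREFUSED - Connection refused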
00:28:28.946 [2024-10-14 16:53:33.366227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.946 [2024-10-14 16:53:33.368643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 [2024-10-14 16:53:33.368744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.368779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.946 [2024-10-14 16:53:33.368794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.946 [2024-10-14 16:53:33.368809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.946 [2024-10-14 16:53:33.368846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.946 qpair failed and we were unable to recover it. 00:28:28.946 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.946 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:28.946 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.946 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.946 [2024-10-14 16:53:33.378563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.946 [2024-10-14 16:53:33.378641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.378660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.946 [2024-10-14 16:53:33.378670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.946 [2024-10-14 16:53:33.378679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.946 [2024-10-14 16:53:33.378701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.946 qpair failed and we were unable to recover it. 
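Note: at this point the target begins listening (the nvmf_tcp_listen notice above) and the shell trace shows the test adding TCP listeners for both the I/O subsystem and the discovery service through rpc_cmd, the test suite's helper that forwards to scripts/rpc.py. Issued directly against a running nvmf_tgt, the equivalent calls would look roughly like this; a sketch assuming the default RPC socket, with the NQN, address and port taken from the trace:

  # Add a TCP listener for the I/O subsystem referenced in the trace.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Add a TCP listener for the discovery subsystem on the same address and port.
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420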
00:28:28.946 16:53:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 699449 00:28:28.946 [2024-10-14 16:53:33.388538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 [2024-10-14 16:53:33.388597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.388614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.946 [2024-10-14 16:53:33.388621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.946 [2024-10-14 16:53:33.388627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.946 [2024-10-14 16:53:33.388641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.946 qpair failed and we were unable to recover it. 00:28:28.946 [2024-10-14 16:53:33.398583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 [2024-10-14 16:53:33.398661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.398674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.946 [2024-10-14 16:53:33.398684] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.946 [2024-10-14 16:53:33.398690] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.946 [2024-10-14 16:53:33.398705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.946 qpair failed and we were unable to recover it. 00:28:28.946 [2024-10-14 16:53:33.408553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 [2024-10-14 16:53:33.408634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.408648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.946 [2024-10-14 16:53:33.408655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.946 [2024-10-14 16:53:33.408661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.946 [2024-10-14 16:53:33.408676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.946 qpair failed and we were unable to recover it. 
00:28:28.946 [2024-10-14 16:53:33.418563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 [2024-10-14 16:53:33.418620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.418634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.946 [2024-10-14 16:53:33.418641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.946 [2024-10-14 16:53:33.418647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.946 [2024-10-14 16:53:33.418662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.946 qpair failed and we were unable to recover it. 00:28:28.946 [2024-10-14 16:53:33.428607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 [2024-10-14 16:53:33.428661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.428675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.946 [2024-10-14 16:53:33.428682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.946 [2024-10-14 16:53:33.428688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.946 [2024-10-14 16:53:33.428703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.946 qpair failed and we were unable to recover it. 00:28:28.946 [2024-10-14 16:53:33.438642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 [2024-10-14 16:53:33.438701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.438714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.946 [2024-10-14 16:53:33.438721] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.946 [2024-10-14 16:53:33.438727] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.946 [2024-10-14 16:53:33.438742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.946 qpair failed and we were unable to recover it. 
00:28:28.946 [2024-10-14 16:53:33.448673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 [2024-10-14 16:53:33.448725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.448739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.946 [2024-10-14 16:53:33.448746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.946 [2024-10-14 16:53:33.448752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.946 [2024-10-14 16:53:33.448766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.946 qpair failed and we were unable to recover it. 00:28:28.946 [2024-10-14 16:53:33.458699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 [2024-10-14 16:53:33.458753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.458766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.946 [2024-10-14 16:53:33.458772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.946 [2024-10-14 16:53:33.458778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.946 [2024-10-14 16:53:33.458792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.946 qpair failed and we were unable to recover it. 00:28:28.946 [2024-10-14 16:53:33.468765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.946 [2024-10-14 16:53:33.468831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.946 [2024-10-14 16:53:33.468845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.947 [2024-10-14 16:53:33.468851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.947 [2024-10-14 16:53:33.468857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.947 [2024-10-14 16:53:33.468871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.947 qpair failed and we were unable to recover it. 
00:28:28.947 [2024-10-14 16:53:33.478750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.947 [2024-10-14 16:53:33.478808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.947 [2024-10-14 16:53:33.478820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.947 [2024-10-14 16:53:33.478827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.947 [2024-10-14 16:53:33.478833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.947 [2024-10-14 16:53:33.478847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.947 qpair failed and we were unable to recover it. 00:28:28.947 [2024-10-14 16:53:33.488763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.947 [2024-10-14 16:53:33.488818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.947 [2024-10-14 16:53:33.488831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.947 [2024-10-14 16:53:33.488841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.947 [2024-10-14 16:53:33.488847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.947 [2024-10-14 16:53:33.488861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.947 qpair failed and we were unable to recover it. 00:28:28.947 [2024-10-14 16:53:33.498767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.947 [2024-10-14 16:53:33.498820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.947 [2024-10-14 16:53:33.498833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.947 [2024-10-14 16:53:33.498840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.947 [2024-10-14 16:53:33.498846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.947 [2024-10-14 16:53:33.498860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.947 qpair failed and we were unable to recover it. 
00:28:28.947 [2024-10-14 16:53:33.508749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.947 [2024-10-14 16:53:33.508857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.947 [2024-10-14 16:53:33.508869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.947 [2024-10-14 16:53:33.508876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.947 [2024-10-14 16:53:33.508881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.947 [2024-10-14 16:53:33.508895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.947 qpair failed and we were unable to recover it. 00:28:28.947 [2024-10-14 16:53:33.518824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.947 [2024-10-14 16:53:33.518879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.947 [2024-10-14 16:53:33.518891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.947 [2024-10-14 16:53:33.518897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.947 [2024-10-14 16:53:33.518903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.947 [2024-10-14 16:53:33.518917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.947 qpair failed and we were unable to recover it. 00:28:28.947 [2024-10-14 16:53:33.528880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.947 [2024-10-14 16:53:33.528936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.947 [2024-10-14 16:53:33.528949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.947 [2024-10-14 16:53:33.528956] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.947 [2024-10-14 16:53:33.528962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.947 [2024-10-14 16:53:33.528976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.947 qpair failed and we were unable to recover it. 
00:28:28.947 [2024-10-14 16:53:33.538967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.947 [2024-10-14 16:53:33.539020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.947 [2024-10-14 16:53:33.539033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.947 [2024-10-14 16:53:33.539039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.947 [2024-10-14 16:53:33.539045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.947 [2024-10-14 16:53:33.539059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.947 qpair failed and we were unable to recover it. 00:28:28.947 [2024-10-14 16:53:33.548947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:28.947 [2024-10-14 16:53:33.548996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:28.947 [2024-10-14 16:53:33.549010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:28.947 [2024-10-14 16:53:33.549016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:28.947 [2024-10-14 16:53:33.549023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:28.947 [2024-10-14 16:53:33.549037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:28.947 qpair failed and we were unable to recover it. 00:28:29.207 [2024-10-14 16:53:33.558976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.207 [2024-10-14 16:53:33.559032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.207 [2024-10-14 16:53:33.559045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.207 [2024-10-14 16:53:33.559052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.207 [2024-10-14 16:53:33.559058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.207 [2024-10-14 16:53:33.559073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.207 qpair failed and we were unable to recover it. 
00:28:29.207 [2024-10-14 16:53:33.569004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.207 [2024-10-14 16:53:33.569055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.569068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.569074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.569080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.569094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 00:28:29.208 [2024-10-14 16:53:33.579014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.579065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.579081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.579088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.579094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.579108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 00:28:29.208 [2024-10-14 16:53:33.589046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.589099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.589111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.589118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.589124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.589138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 
00:28:29.208 [2024-10-14 16:53:33.599090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.599145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.599158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.599165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.599171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.599185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 00:28:29.208 [2024-10-14 16:53:33.609100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.609158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.609170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.609177] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.609183] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.609197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 00:28:29.208 [2024-10-14 16:53:33.619179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.619239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.619251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.619258] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.619263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.619280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 
00:28:29.208 [2024-10-14 16:53:33.629172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.629225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.629238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.629244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.629250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.629264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 00:28:29.208 [2024-10-14 16:53:33.639205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.639260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.639273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.639279] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.639285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.639299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 00:28:29.208 [2024-10-14 16:53:33.649230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.649298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.649311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.649317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.649323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.649337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 
00:28:29.208 [2024-10-14 16:53:33.659260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.659316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.659328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.659335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.659341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.659355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 00:28:29.208 [2024-10-14 16:53:33.669272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.669317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.669333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.669339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.669345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.669359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 00:28:29.208 [2024-10-14 16:53:33.679316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.679375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.679387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.679394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.679400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.679414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 
00:28:29.208 [2024-10-14 16:53:33.689342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.689396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.689409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.689416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.689422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.689437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 00:28:29.208 [2024-10-14 16:53:33.699382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.208 [2024-10-14 16:53:33.699443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.208 [2024-10-14 16:53:33.699455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.208 [2024-10-14 16:53:33.699462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.208 [2024-10-14 16:53:33.699468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.208 [2024-10-14 16:53:33.699482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.208 qpair failed and we were unable to recover it. 00:28:29.209 [2024-10-14 16:53:33.709416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.709505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.709517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.709524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.709530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.709546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 
00:28:29.209 [2024-10-14 16:53:33.719450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.719509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.719522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.719528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.719534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.719548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 00:28:29.209 [2024-10-14 16:53:33.729474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.729530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.729543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.729550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.729556] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.729570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 00:28:29.209 [2024-10-14 16:53:33.739489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.739542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.739554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.739561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.739567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.739581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 
00:28:29.209 [2024-10-14 16:53:33.749514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.749566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.749579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.749586] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.749592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.749608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 00:28:29.209 [2024-10-14 16:53:33.759587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.759661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.759677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.759683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.759689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.759703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 00:28:29.209 [2024-10-14 16:53:33.769576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.769636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.769650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.769657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.769662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.769677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 
00:28:29.209 [2024-10-14 16:53:33.779595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.779649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.779661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.779668] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.779673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.779687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 00:28:29.209 [2024-10-14 16:53:33.789632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.789686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.789698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.789705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.789711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.789725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 00:28:29.209 [2024-10-14 16:53:33.799672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.799735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.799747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.799754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.799763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.799778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 
00:28:29.209 [2024-10-14 16:53:33.809700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.809754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.809766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.809773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.809778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.809792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 00:28:29.209 [2024-10-14 16:53:33.819721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.819775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.819787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.819794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.819800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.819815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 00:28:29.209 [2024-10-14 16:53:33.829740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.829795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.829808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.829815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.829821] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.829834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 
00:28:29.209 [2024-10-14 16:53:33.839806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.209 [2024-10-14 16:53:33.839865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.209 [2024-10-14 16:53:33.839878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.209 [2024-10-14 16:53:33.839885] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.209 [2024-10-14 16:53:33.839891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.209 [2024-10-14 16:53:33.839905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.209 qpair failed and we were unable to recover it. 00:28:29.470 [2024-10-14 16:53:33.849797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.470 [2024-10-14 16:53:33.849858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.470 [2024-10-14 16:53:33.849871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.470 [2024-10-14 16:53:33.849878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.470 [2024-10-14 16:53:33.849883] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.470 [2024-10-14 16:53:33.849897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.470 qpair failed and we were unable to recover it. 00:28:29.470 [2024-10-14 16:53:33.859755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.470 [2024-10-14 16:53:33.859810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.470 [2024-10-14 16:53:33.859823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.470 [2024-10-14 16:53:33.859830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.470 [2024-10-14 16:53:33.859836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.859850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 
00:28:29.471 [2024-10-14 16:53:33.869848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.869944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.869957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.869963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.869969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.869983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 00:28:29.471 [2024-10-14 16:53:33.879893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.879950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.879963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.879969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.879975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.879989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 00:28:29.471 [2024-10-14 16:53:33.889901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.889956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.889969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.889975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.889984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.889998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 
00:28:29.471 [2024-10-14 16:53:33.899936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.899989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.900002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.900008] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.900014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.900028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 00:28:29.471 [2024-10-14 16:53:33.909951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.910015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.910028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.910035] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.910040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.910054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 00:28:29.471 [2024-10-14 16:53:33.919985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.920046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.920058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.920065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.920071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.920085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 
00:28:29.471 [2024-10-14 16:53:33.930001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.930057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.930070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.930076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.930082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.930096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 00:28:29.471 [2024-10-14 16:53:33.939942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.940011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.940024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.940030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.940036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.940051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 00:28:29.471 [2024-10-14 16:53:33.950105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.950169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.950182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.950189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.950195] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.950209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 
00:28:29.471 [2024-10-14 16:53:33.960090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.960144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.960156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.960163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.960169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.960182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 00:28:29.471 [2024-10-14 16:53:33.970153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.970227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.970239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.970247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.970253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.970266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 00:28:29.471 [2024-10-14 16:53:33.980137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.980187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.980201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.980211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.980218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.980233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 
00:28:29.471 [2024-10-14 16:53:33.990087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:33.990184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.471 [2024-10-14 16:53:33.990197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.471 [2024-10-14 16:53:33.990204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.471 [2024-10-14 16:53:33.990210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.471 [2024-10-14 16:53:33.990224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.471 qpair failed and we were unable to recover it. 00:28:29.471 [2024-10-14 16:53:34.000182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.471 [2024-10-14 16:53:34.000281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.000294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.000300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.000306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.000320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 00:28:29.472 [2024-10-14 16:53:34.010218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.472 [2024-10-14 16:53:34.010268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.010281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.010287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.010293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.010308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 
00:28:29.472 [2024-10-14 16:53:34.020259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.472 [2024-10-14 16:53:34.020308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.020321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.020327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.020333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.020347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 00:28:29.472 [2024-10-14 16:53:34.030307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.472 [2024-10-14 16:53:34.030366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.030379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.030386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.030392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.030406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 00:28:29.472 [2024-10-14 16:53:34.040301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.472 [2024-10-14 16:53:34.040365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.040377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.040384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.040390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.040405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 
00:28:29.472 [2024-10-14 16:53:34.050331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.472 [2024-10-14 16:53:34.050387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.050401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.050407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.050413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.050428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 00:28:29.472 [2024-10-14 16:53:34.060362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.472 [2024-10-14 16:53:34.060416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.060429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.060436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.060442] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.060457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 00:28:29.472 [2024-10-14 16:53:34.070422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.472 [2024-10-14 16:53:34.070477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.070496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.070503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.070508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.070523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 
00:28:29.472 [2024-10-14 16:53:34.080426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.472 [2024-10-14 16:53:34.080482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.080495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.080502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.080508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.080522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 00:28:29.472 [2024-10-14 16:53:34.090440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.472 [2024-10-14 16:53:34.090499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.090514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.090523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.090531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.090545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 00:28:29.472 [2024-10-14 16:53:34.100383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.472 [2024-10-14 16:53:34.100440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.472 [2024-10-14 16:53:34.100453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.472 [2024-10-14 16:53:34.100460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.472 [2024-10-14 16:53:34.100465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.472 [2024-10-14 16:53:34.100480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.472 qpair failed and we were unable to recover it. 
00:28:29.732 [2024-10-14 16:53:34.110466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.732 [2024-10-14 16:53:34.110520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.732 [2024-10-14 16:53:34.110533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.732 [2024-10-14 16:53:34.110540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.732 [2024-10-14 16:53:34.110547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.732 [2024-10-14 16:53:34.110561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.732 qpair failed and we were unable to recover it. 00:28:29.732 [2024-10-14 16:53:34.120461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.120516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.120529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.120537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.120545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.120560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 00:28:29.733 [2024-10-14 16:53:34.130642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.130716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.130730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.130736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.130742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.130756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 
00:28:29.733 [2024-10-14 16:53:34.140622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.140673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.140686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.140692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.140698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.140712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 00:28:29.733 [2024-10-14 16:53:34.150545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.150597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.150616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.150623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.150629] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.150643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 00:28:29.733 [2024-10-14 16:53:34.160588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.160651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.160668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.160675] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.160681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.160695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 
00:28:29.733 [2024-10-14 16:53:34.170745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.170810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.170823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.170829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.170835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.170849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 00:28:29.733 [2024-10-14 16:53:34.180679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.180734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.180746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.180753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.180759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.180773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 00:28:29.733 [2024-10-14 16:53:34.190796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.190868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.190881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.190888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.190893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.190907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 
00:28:29.733 [2024-10-14 16:53:34.200754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.200812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.200824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.200831] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.200837] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.200854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 00:28:29.733 [2024-10-14 16:53:34.210732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.210791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.210804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.210810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.210816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.210830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 00:28:29.733 [2024-10-14 16:53:34.220824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.220874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.220887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.220894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.220900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.220914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 
00:28:29.733 [2024-10-14 16:53:34.230814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.230869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.230883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.230889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.230895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.230909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 00:28:29.733 [2024-10-14 16:53:34.240891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.240947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.240959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.240966] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.240972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.240986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 00:28:29.733 [2024-10-14 16:53:34.250857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.733 [2024-10-14 16:53:34.250909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.733 [2024-10-14 16:53:34.250925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.733 [2024-10-14 16:53:34.250932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.733 [2024-10-14 16:53:34.250937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.733 [2024-10-14 16:53:34.250951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.733 qpair failed and we were unable to recover it. 
00:28:29.733 [2024-10-14 16:53:34.260974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.261028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.261041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.261047] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.261053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.261067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 00:28:29.734 [2024-10-14 16:53:34.270966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.271017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.271030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.271037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.271043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.271057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 00:28:29.734 [2024-10-14 16:53:34.281002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.281058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.281071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.281078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.281084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.281098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 
00:28:29.734 [2024-10-14 16:53:34.291020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.291077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.291089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.291095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.291104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.291118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 00:28:29.734 [2024-10-14 16:53:34.301048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.301103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.301116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.301122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.301128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.301143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 00:28:29.734 [2024-10-14 16:53:34.311109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.311164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.311178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.311184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.311190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.311204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 
00:28:29.734 [2024-10-14 16:53:34.321049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.321113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.321125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.321132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.321137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.321152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 00:28:29.734 [2024-10-14 16:53:34.331123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.331175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.331188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.331195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.331200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.331215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 00:28:29.734 [2024-10-14 16:53:34.341193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.341279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.341293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.341299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.341305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.341319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 
00:28:29.734 [2024-10-14 16:53:34.351159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.351225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.351239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.351245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.351251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.351265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 00:28:29.734 [2024-10-14 16:53:34.361267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.734 [2024-10-14 16:53:34.361331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.734 [2024-10-14 16:53:34.361345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.734 [2024-10-14 16:53:34.361351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.734 [2024-10-14 16:53:34.361358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.734 [2024-10-14 16:53:34.361372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.734 qpair failed and we were unable to recover it. 00:28:29.994 [2024-10-14 16:53:34.371295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.994 [2024-10-14 16:53:34.371353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.994 [2024-10-14 16:53:34.371366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.371373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.371380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.371394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 
00:28:29.995 [2024-10-14 16:53:34.381199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.381254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.381267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.381274] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.381284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.381298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 00:28:29.995 [2024-10-14 16:53:34.391261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.391348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.391361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.391367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.391373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.391387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 00:28:29.995 [2024-10-14 16:53:34.401367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.401418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.401431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.401437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.401443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.401457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 
00:28:29.995 [2024-10-14 16:53:34.411302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.411358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.411370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.411377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.411383] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.411397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 00:28:29.995 [2024-10-14 16:53:34.421315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.421374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.421391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.421397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.421404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.421418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 00:28:29.995 [2024-10-14 16:53:34.431356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.431411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.431424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.431430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.431436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.431450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 
00:28:29.995 [2024-10-14 16:53:34.441484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.441541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.441554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.441560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.441566] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.441580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 00:28:29.995 [2024-10-14 16:53:34.451415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.451479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.451492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.451499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.451504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.451518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 00:28:29.995 [2024-10-14 16:53:34.461479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.461527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.461540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.461546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.461552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.461566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 
00:28:29.995 [2024-10-14 16:53:34.471561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.471624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.471637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.471647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.471653] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.471667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 00:28:29.995 [2024-10-14 16:53:34.481548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.481614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.481627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.481634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.481639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.481654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 00:28:29.995 [2024-10-14 16:53:34.491626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.491697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.491710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.491716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.491722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.491736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 
00:28:29.995 [2024-10-14 16:53:34.501623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.501675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.501688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.995 [2024-10-14 16:53:34.501695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.995 [2024-10-14 16:53:34.501700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.995 [2024-10-14 16:53:34.501715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.995 qpair failed and we were unable to recover it. 00:28:29.995 [2024-10-14 16:53:34.511658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.995 [2024-10-14 16:53:34.511711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.995 [2024-10-14 16:53:34.511724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.511730] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.511736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.511750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 00:28:29.996 [2024-10-14 16:53:34.521695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.521754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.521766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.521773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.521779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.521793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 
00:28:29.996 [2024-10-14 16:53:34.531720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.531772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.531785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.531791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.531797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.531811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 00:28:29.996 [2024-10-14 16:53:34.541759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.541817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.541830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.541837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.541843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.541857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 00:28:29.996 [2024-10-14 16:53:34.551765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.551818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.551831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.551837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.551843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.551857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 
00:28:29.996 [2024-10-14 16:53:34.561807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.561903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.561916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.561925] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.561931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.561945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 00:28:29.996 [2024-10-14 16:53:34.571860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.571907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.571920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.571926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.571932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.571946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 00:28:29.996 [2024-10-14 16:53:34.581862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.581915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.581928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.581935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.581940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.581954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 
00:28:29.996 [2024-10-14 16:53:34.591932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.592023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.592036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.592042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.592047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.592062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 00:28:29.996 [2024-10-14 16:53:34.601953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.602007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.602020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.602026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.602032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.602046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 00:28:29.996 [2024-10-14 16:53:34.611947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.612004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.612017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.612023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.612029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.612043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 
00:28:29.996 [2024-10-14 16:53:34.621956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:29.996 [2024-10-14 16:53:34.622012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:29.996 [2024-10-14 16:53:34.622025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:29.996 [2024-10-14 16:53:34.622032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:29.996 [2024-10-14 16:53:34.622038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:29.996 [2024-10-14 16:53:34.622053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:29.996 qpair failed and we were unable to recover it. 00:28:30.256 [2024-10-14 16:53:34.631997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.256 [2024-10-14 16:53:34.632049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.256 [2024-10-14 16:53:34.632062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.256 [2024-10-14 16:53:34.632068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.256 [2024-10-14 16:53:34.632074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.256 [2024-10-14 16:53:34.632089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.256 qpair failed and we were unable to recover it. 00:28:30.256 [2024-10-14 16:53:34.642031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.256 [2024-10-14 16:53:34.642085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.256 [2024-10-14 16:53:34.642098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.256 [2024-10-14 16:53:34.642105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.256 [2024-10-14 16:53:34.642111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.256 [2024-10-14 16:53:34.642124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.256 qpair failed and we were unable to recover it. 
00:28:30.257 [2024-10-14 16:53:34.652060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.652110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.652126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.652133] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.652139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.652152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 00:28:30.257 [2024-10-14 16:53:34.662101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.662159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.662172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.662179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.662184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.662199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 00:28:30.257 [2024-10-14 16:53:34.672101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.672150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.672163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.672169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.672175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.672189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 
00:28:30.257 [2024-10-14 16:53:34.682119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.682176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.682188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.682195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.682201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.682215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 00:28:30.257 [2024-10-14 16:53:34.692166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.692258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.692271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.692277] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.692283] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.692300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 00:28:30.257 [2024-10-14 16:53:34.702185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.702234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.702247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.702253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.702259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.702274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 
00:28:30.257 [2024-10-14 16:53:34.712233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.712282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.712294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.712301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.712306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.712320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 00:28:30.257 [2024-10-14 16:53:34.722261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.722342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.722355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.722362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.722367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.722381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 00:28:30.257 [2024-10-14 16:53:34.732278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.732325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.732338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.732345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.732351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.732366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 
00:28:30.257 [2024-10-14 16:53:34.742308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.742361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.742377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.742384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.742389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.742403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 00:28:30.257 [2024-10-14 16:53:34.752340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.752394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.752407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.752414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.752420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.752434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 00:28:30.257 [2024-10-14 16:53:34.762371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.762425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.762437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.762444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.257 [2024-10-14 16:53:34.762450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.257 [2024-10-14 16:53:34.762463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.257 qpair failed and we were unable to recover it. 
00:28:30.257 [2024-10-14 16:53:34.772408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.257 [2024-10-14 16:53:34.772484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.257 [2024-10-14 16:53:34.772497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.257 [2024-10-14 16:53:34.772503] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.772509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.772523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 00:28:30.258 [2024-10-14 16:53:34.782418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.782490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.782504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.782510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.782516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.782536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 00:28:30.258 [2024-10-14 16:53:34.792447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.792500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.792513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.792519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.792525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.792539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 
00:28:30.258 [2024-10-14 16:53:34.802483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.802540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.802553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.802559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.802565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.802579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 00:28:30.258 [2024-10-14 16:53:34.812516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.812570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.812583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.812589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.812595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.812613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 00:28:30.258 [2024-10-14 16:53:34.822534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.822583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.822595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.822606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.822612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.822626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 
00:28:30.258 [2024-10-14 16:53:34.832553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.832613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.832627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.832633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.832639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.832654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 00:28:30.258 [2024-10-14 16:53:34.842594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.842657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.842670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.842676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.842682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.842697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 00:28:30.258 [2024-10-14 16:53:34.852630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.852689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.852702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.852709] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.852715] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.852729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 
00:28:30.258 [2024-10-14 16:53:34.862651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.862706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.862719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.862725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.862731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.862745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 00:28:30.258 [2024-10-14 16:53:34.872676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.872729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.872742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.872748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.872766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.872780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 00:28:30.258 [2024-10-14 16:53:34.882703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.258 [2024-10-14 16:53:34.882758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.258 [2024-10-14 16:53:34.882771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.258 [2024-10-14 16:53:34.882777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.258 [2024-10-14 16:53:34.882784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.258 [2024-10-14 16:53:34.882798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.258 qpair failed and we were unable to recover it. 
00:28:30.518 [2024-10-14 16:53:34.892731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.892784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.892797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.518 [2024-10-14 16:53:34.892804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.518 [2024-10-14 16:53:34.892810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.518 [2024-10-14 16:53:34.892824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.518 qpair failed and we were unable to recover it. 00:28:30.518 [2024-10-14 16:53:34.902811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.902866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.902880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.518 [2024-10-14 16:53:34.902886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.518 [2024-10-14 16:53:34.902892] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.518 [2024-10-14 16:53:34.902906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.518 qpair failed and we were unable to recover it. 00:28:30.518 [2024-10-14 16:53:34.912782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.912837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.912850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.518 [2024-10-14 16:53:34.912856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.518 [2024-10-14 16:53:34.912863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.518 [2024-10-14 16:53:34.912877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.518 qpair failed and we were unable to recover it. 
00:28:30.518 [2024-10-14 16:53:34.922826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.922883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.922896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.518 [2024-10-14 16:53:34.922902] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.518 [2024-10-14 16:53:34.922908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.518 [2024-10-14 16:53:34.922922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.518 qpair failed and we were unable to recover it. 00:28:30.518 [2024-10-14 16:53:34.932841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.932898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.932911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.518 [2024-10-14 16:53:34.932918] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.518 [2024-10-14 16:53:34.932924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.518 [2024-10-14 16:53:34.932939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.518 qpair failed and we were unable to recover it. 00:28:30.518 [2024-10-14 16:53:34.942869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.942917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.942929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.518 [2024-10-14 16:53:34.942936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.518 [2024-10-14 16:53:34.942942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.518 [2024-10-14 16:53:34.942957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.518 qpair failed and we were unable to recover it. 
00:28:30.518 [2024-10-14 16:53:34.952921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.952968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.952981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.518 [2024-10-14 16:53:34.952987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.518 [2024-10-14 16:53:34.952993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.518 [2024-10-14 16:53:34.953007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.518 qpair failed and we were unable to recover it. 00:28:30.518 [2024-10-14 16:53:34.962925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.962980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.962993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.518 [2024-10-14 16:53:34.963003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.518 [2024-10-14 16:53:34.963009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.518 [2024-10-14 16:53:34.963022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.518 qpair failed and we were unable to recover it. 00:28:30.518 [2024-10-14 16:53:34.972983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.973065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.973078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.518 [2024-10-14 16:53:34.973084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.518 [2024-10-14 16:53:34.973090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.518 [2024-10-14 16:53:34.973104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.518 qpair failed and we were unable to recover it. 
00:28:30.518 [2024-10-14 16:53:34.982976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.983027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.983040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.518 [2024-10-14 16:53:34.983046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.518 [2024-10-14 16:53:34.983052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.518 [2024-10-14 16:53:34.983066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.518 qpair failed and we were unable to recover it. 00:28:30.518 [2024-10-14 16:53:34.992999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.518 [2024-10-14 16:53:34.993051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.518 [2024-10-14 16:53:34.993064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:34.993070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:34.993076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:34.993091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 00:28:30.519 [2024-10-14 16:53:35.003043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.003099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.003122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.003129] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.003135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.003153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 
00:28:30.519 [2024-10-14 16:53:35.013054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.013111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.013124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.013131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.013137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.013151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 00:28:30.519 [2024-10-14 16:53:35.023090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.023146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.023158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.023165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.023171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.023185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 00:28:30.519 [2024-10-14 16:53:35.033077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.033136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.033149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.033156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.033162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.033176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 
00:28:30.519 [2024-10-14 16:53:35.043156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.043217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.043231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.043238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.043244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.043257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 00:28:30.519 [2024-10-14 16:53:35.053165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.053246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.053261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.053271] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.053277] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.053292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 00:28:30.519 [2024-10-14 16:53:35.063200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.063255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.063268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.063275] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.063281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.063295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 
00:28:30.519 [2024-10-14 16:53:35.073228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.073282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.073295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.073301] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.073307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.073322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 00:28:30.519 [2024-10-14 16:53:35.083260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.083312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.083325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.083332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.083337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.083351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 00:28:30.519 [2024-10-14 16:53:35.093279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.093332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.093345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.093352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.093358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.093372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 
00:28:30.519 [2024-10-14 16:53:35.103319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.103373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.103386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.103392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.103398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.103412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 00:28:30.519 [2024-10-14 16:53:35.113327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.113406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.113419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.113425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.113431] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.113445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 00:28:30.519 [2024-10-14 16:53:35.123376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.519 [2024-10-14 16:53:35.123430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.519 [2024-10-14 16:53:35.123444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.519 [2024-10-14 16:53:35.123450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.519 [2024-10-14 16:53:35.123456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.519 [2024-10-14 16:53:35.123470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.519 qpair failed and we were unable to recover it. 
00:28:30.520 [2024-10-14 16:53:35.133404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.520 [2024-10-14 16:53:35.133494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.520 [2024-10-14 16:53:35.133508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.520 [2024-10-14 16:53:35.133514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.520 [2024-10-14 16:53:35.133520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.520 [2024-10-14 16:53:35.133534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.520 qpair failed and we were unable to recover it. 00:28:30.520 [2024-10-14 16:53:35.143502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.520 [2024-10-14 16:53:35.143598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.520 [2024-10-14 16:53:35.143618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.520 [2024-10-14 16:53:35.143625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.520 [2024-10-14 16:53:35.143631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.520 [2024-10-14 16:53:35.143645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.520 qpair failed and we were unable to recover it. 00:28:30.780 [2024-10-14 16:53:35.153459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.153512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.153526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.153533] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.153539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.153553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 
00:28:30.780 [2024-10-14 16:53:35.163495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.163576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.163590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.163596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.163606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.163621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 00:28:30.780 [2024-10-14 16:53:35.173526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.173614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.173627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.173634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.173640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.173654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 00:28:30.780 [2024-10-14 16:53:35.183540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.183590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.183607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.183613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.183619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.183637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 
00:28:30.780 [2024-10-14 16:53:35.193546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.193596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.193613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.193619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.193625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.193639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 00:28:30.780 [2024-10-14 16:53:35.203615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.203672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.203685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.203691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.203697] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.203711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 00:28:30.780 [2024-10-14 16:53:35.213644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.213732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.213745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.213751] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.213757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.213771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 
00:28:30.780 [2024-10-14 16:53:35.223671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.223727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.223740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.223747] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.223752] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.223767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 00:28:30.780 [2024-10-14 16:53:35.233712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.233804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.233821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.233827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.233833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.233848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 00:28:30.780 [2024-10-14 16:53:35.243750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.243850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.243863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.243869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.243875] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.243889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 
00:28:30.780 [2024-10-14 16:53:35.253770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.253858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.253872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.253878] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.253884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.253898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 00:28:30.780 [2024-10-14 16:53:35.263774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.263821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.263833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.263840] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.263845] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.263859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 00:28:30.780 [2024-10-14 16:53:35.273812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.273878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.273891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.273897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.780 [2024-10-14 16:53:35.273903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.780 [2024-10-14 16:53:35.273924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.780 qpair failed and we were unable to recover it. 
00:28:30.780 [2024-10-14 16:53:35.283838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.780 [2024-10-14 16:53:35.283895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.780 [2024-10-14 16:53:35.283909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.780 [2024-10-14 16:53:35.283915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.283921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.283935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 00:28:30.781 [2024-10-14 16:53:35.293862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.293913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.293926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.293933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.293938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.293952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 00:28:30.781 [2024-10-14 16:53:35.303886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.303935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.303947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.303953] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.303959] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.303973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 
00:28:30.781 [2024-10-14 16:53:35.313891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.313968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.313980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.313987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.313992] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.314007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 00:28:30.781 [2024-10-14 16:53:35.323993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.324052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.324068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.324075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.324080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.324094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 00:28:30.781 [2024-10-14 16:53:35.333963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.334025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.334038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.334045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.334051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.334066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 
00:28:30.781 [2024-10-14 16:53:35.344008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.344061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.344074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.344080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.344086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.344100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 00:28:30.781 [2024-10-14 16:53:35.354081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.354127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.354140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.354147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.354152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.354167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 00:28:30.781 [2024-10-14 16:53:35.364110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.364214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.364227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.364234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.364243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.364258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 
00:28:30.781 [2024-10-14 16:53:35.374019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.374111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.374124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.374130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.374135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.374150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 00:28:30.781 [2024-10-14 16:53:35.384039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.384093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.384105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.384112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.384117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.384131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 00:28:30.781 [2024-10-14 16:53:35.394149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.394200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.394213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.394219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.394225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.394239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 
00:28:30.781 [2024-10-14 16:53:35.404179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.404233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.404245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.404252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.404258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.404271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 00:28:30.781 [2024-10-14 16:53:35.414194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:30.781 [2024-10-14 16:53:35.414249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:30.781 [2024-10-14 16:53:35.414262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:30.781 [2024-10-14 16:53:35.414269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:30.781 [2024-10-14 16:53:35.414274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:30.781 [2024-10-14 16:53:35.414289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.781 qpair failed and we were unable to recover it. 00:28:31.042 [2024-10-14 16:53:35.424230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.042 [2024-10-14 16:53:35.424278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.042 [2024-10-14 16:53:35.424292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.042 [2024-10-14 16:53:35.424299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.042 [2024-10-14 16:53:35.424304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.042 [2024-10-14 16:53:35.424319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.042 qpair failed and we were unable to recover it. 
00:28:31.042 [2024-10-14 16:53:35.434305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.042 [2024-10-14 16:53:35.434356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.042 [2024-10-14 16:53:35.434369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.042 [2024-10-14 16:53:35.434375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.042 [2024-10-14 16:53:35.434381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.042 [2024-10-14 16:53:35.434396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.042 qpair failed and we were unable to recover it. 00:28:31.042 [2024-10-14 16:53:35.444288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.042 [2024-10-14 16:53:35.444345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.042 [2024-10-14 16:53:35.444358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.042 [2024-10-14 16:53:35.444365] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.042 [2024-10-14 16:53:35.444371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.042 [2024-10-14 16:53:35.444386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.042 qpair failed and we were unable to recover it. 00:28:31.042 [2024-10-14 16:53:35.454321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.042 [2024-10-14 16:53:35.454381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.042 [2024-10-14 16:53:35.454394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.042 [2024-10-14 16:53:35.454401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.042 [2024-10-14 16:53:35.454410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.042 [2024-10-14 16:53:35.454425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.042 qpair failed and we were unable to recover it. 
00:28:31.042 [2024-10-14 16:53:35.464379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.042 [2024-10-14 16:53:35.464428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.042 [2024-10-14 16:53:35.464441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.042 [2024-10-14 16:53:35.464447] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.042 [2024-10-14 16:53:35.464453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.042 [2024-10-14 16:53:35.464468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.042 qpair failed and we were unable to recover it. 00:28:31.042 [2024-10-14 16:53:35.474408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.042 [2024-10-14 16:53:35.474465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.042 [2024-10-14 16:53:35.474478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.042 [2024-10-14 16:53:35.474484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.042 [2024-10-14 16:53:35.474490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.042 [2024-10-14 16:53:35.474504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.042 qpair failed and we were unable to recover it. 00:28:31.042 [2024-10-14 16:53:35.484359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.042 [2024-10-14 16:53:35.484432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.042 [2024-10-14 16:53:35.484445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.042 [2024-10-14 16:53:35.484451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.042 [2024-10-14 16:53:35.484457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.042 [2024-10-14 16:53:35.484471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.042 qpair failed and we were unable to recover it. 
00:28:31.042 [2024-10-14 16:53:35.494407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.042 [2024-10-14 16:53:35.494489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.042 [2024-10-14 16:53:35.494503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.042 [2024-10-14 16:53:35.494510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.042 [2024-10-14 16:53:35.494516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.042 [2024-10-14 16:53:35.494530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.042 qpair failed and we were unable to recover it. 00:28:31.042 [2024-10-14 16:53:35.504443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.042 [2024-10-14 16:53:35.504523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.504537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.504544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.504549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.504563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 00:28:31.043 [2024-10-14 16:53:35.514470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.514569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.514582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.514589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.514595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.514613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 
00:28:31.043 [2024-10-14 16:53:35.524483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.524552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.524566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.524572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.524578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.524592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 00:28:31.043 [2024-10-14 16:53:35.534518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.534574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.534588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.534596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.534609] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.534624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 00:28:31.043 [2024-10-14 16:53:35.544565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.544650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.544662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.544673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.544678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.544693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 
00:28:31.043 [2024-10-14 16:53:35.554610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.554699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.554712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.554718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.554724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.554738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 00:28:31.043 [2024-10-14 16:53:35.564629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.564687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.564700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.564706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.564712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.564726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 00:28:31.043 [2024-10-14 16:53:35.574572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.574633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.574646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.574653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.574658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.574673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 
00:28:31.043 [2024-10-14 16:53:35.584700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.584754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.584767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.584774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.584780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.584794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 00:28:31.043 [2024-10-14 16:53:35.594632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.594694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.594707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.594714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.594720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.594734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 00:28:31.043 [2024-10-14 16:53:35.604729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.604784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.604797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.604803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.604809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.604823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 
00:28:31.043 [2024-10-14 16:53:35.614757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.614811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.614823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.614830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.614835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.614849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 00:28:31.043 [2024-10-14 16:53:35.624771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.624822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.624835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.624842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.624847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.624862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 00:28:31.043 [2024-10-14 16:53:35.634788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.634840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.634857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.043 [2024-10-14 16:53:35.634863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.043 [2024-10-14 16:53:35.634869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.043 [2024-10-14 16:53:35.634883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.043 qpair failed and we were unable to recover it. 
00:28:31.043 [2024-10-14 16:53:35.644861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.043 [2024-10-14 16:53:35.644952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.043 [2024-10-14 16:53:35.644965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.044 [2024-10-14 16:53:35.644972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.044 [2024-10-14 16:53:35.644977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.044 [2024-10-14 16:53:35.644991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.044 qpair failed and we were unable to recover it. 00:28:31.044 [2024-10-14 16:53:35.654862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.044 [2024-10-14 16:53:35.654949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.044 [2024-10-14 16:53:35.654962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.044 [2024-10-14 16:53:35.654969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.044 [2024-10-14 16:53:35.654974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.044 [2024-10-14 16:53:35.654988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.044 qpair failed and we were unable to recover it. 00:28:31.044 [2024-10-14 16:53:35.664896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.044 [2024-10-14 16:53:35.664993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.044 [2024-10-14 16:53:35.665006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.044 [2024-10-14 16:53:35.665012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.044 [2024-10-14 16:53:35.665018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.044 [2024-10-14 16:53:35.665032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.044 qpair failed and we were unable to recover it. 
00:28:31.044 [2024-10-14 16:53:35.674936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.044 [2024-10-14 16:53:35.674991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.044 [2024-10-14 16:53:35.675004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.044 [2024-10-14 16:53:35.675011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.044 [2024-10-14 16:53:35.675016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.044 [2024-10-14 16:53:35.675030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.044 qpair failed and we were unable to recover it. 00:28:31.304 [2024-10-14 16:53:35.684987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.304 [2024-10-14 16:53:35.685047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.304 [2024-10-14 16:53:35.685060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.304 [2024-10-14 16:53:35.685067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.304 [2024-10-14 16:53:35.685072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.304 [2024-10-14 16:53:35.685087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.304 qpair failed and we were unable to recover it. 00:28:31.304 [2024-10-14 16:53:35.695009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.304 [2024-10-14 16:53:35.695098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.304 [2024-10-14 16:53:35.695111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.304 [2024-10-14 16:53:35.695117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.304 [2024-10-14 16:53:35.695123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.304 [2024-10-14 16:53:35.695137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.304 qpair failed and we were unable to recover it. 
00:28:31.304 [2024-10-14 16:53:35.705004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.304 [2024-10-14 16:53:35.705072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.304 [2024-10-14 16:53:35.705085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.304 [2024-10-14 16:53:35.705092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.304 [2024-10-14 16:53:35.705098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.304 [2024-10-14 16:53:35.705112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.304 qpair failed and we were unable to recover it. 00:28:31.304 [2024-10-14 16:53:35.714951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.304 [2024-10-14 16:53:35.715002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.304 [2024-10-14 16:53:35.715015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.304 [2024-10-14 16:53:35.715021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.304 [2024-10-14 16:53:35.715027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.304 [2024-10-14 16:53:35.715041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.304 qpair failed and we were unable to recover it. 00:28:31.304 [2024-10-14 16:53:35.724988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.304 [2024-10-14 16:53:35.725056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.304 [2024-10-14 16:53:35.725072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.304 [2024-10-14 16:53:35.725079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.304 [2024-10-14 16:53:35.725085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.304 [2024-10-14 16:53:35.725099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.304 qpair failed and we were unable to recover it. 
00:28:31.304 [2024-10-14 16:53:35.735012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.304 [2024-10-14 16:53:35.735068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.304 [2024-10-14 16:53:35.735082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.304 [2024-10-14 16:53:35.735089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.304 [2024-10-14 16:53:35.735095] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.304 [2024-10-14 16:53:35.735109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.304 qpair failed and we were unable to recover it. 00:28:31.304 [2024-10-14 16:53:35.745134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.304 [2024-10-14 16:53:35.745189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.304 [2024-10-14 16:53:35.745201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.304 [2024-10-14 16:53:35.745208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.304 [2024-10-14 16:53:35.745214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.304 [2024-10-14 16:53:35.745228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.304 qpair failed and we were unable to recover it. 00:28:31.304 [2024-10-14 16:53:35.755164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.304 [2024-10-14 16:53:35.755214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.755227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.755233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.755239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.755253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 
00:28:31.305 [2024-10-14 16:53:35.765232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.765320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.765333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.765340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.765346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.765363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 00:28:31.305 [2024-10-14 16:53:35.775233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.775290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.775302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.775309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.775315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.775329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 00:28:31.305 [2024-10-14 16:53:35.785211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.785265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.785278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.785284] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.785290] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.785304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 
00:28:31.305 [2024-10-14 16:53:35.795242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.795297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.795310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.795316] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.795322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.795336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 00:28:31.305 [2024-10-14 16:53:35.805232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.805285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.805297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.805304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.805310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.805324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 00:28:31.305 [2024-10-14 16:53:35.815315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.815366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.815381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.815388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.815394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.815408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 
00:28:31.305 [2024-10-14 16:53:35.825273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.825325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.825338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.825344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.825350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.825364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 00:28:31.305 [2024-10-14 16:53:35.835375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.835424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.835437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.835444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.835449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.835464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 00:28:31.305 [2024-10-14 16:53:35.845371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.845436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.845449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.845456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.845461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.845476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 
00:28:31.305 [2024-10-14 16:53:35.855400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.855455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.855467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.855473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.855483] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.855496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 00:28:31.305 [2024-10-14 16:53:35.865473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.865528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.865541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.865547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.865553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.865567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 00:28:31.305 [2024-10-14 16:53:35.875458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.875524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.875537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.875543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.875549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.875564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 
00:28:31.305 [2024-10-14 16:53:35.885454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.305 [2024-10-14 16:53:35.885510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.305 [2024-10-14 16:53:35.885523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.305 [2024-10-14 16:53:35.885530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.305 [2024-10-14 16:53:35.885536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.305 [2024-10-14 16:53:35.885550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.305 qpair failed and we were unable to recover it. 00:28:31.305 [2024-10-14 16:53:35.895534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.306 [2024-10-14 16:53:35.895636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.306 [2024-10-14 16:53:35.895649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.306 [2024-10-14 16:53:35.895655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.306 [2024-10-14 16:53:35.895661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.306 [2024-10-14 16:53:35.895677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-10-14 16:53:35.905510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.306 [2024-10-14 16:53:35.905567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.306 [2024-10-14 16:53:35.905581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.306 [2024-10-14 16:53:35.905587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.306 [2024-10-14 16:53:35.905593] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.306 [2024-10-14 16:53:35.905611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.306 qpair failed and we were unable to recover it. 
00:28:31.306 [2024-10-14 16:53:35.915596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.306 [2024-10-14 16:53:35.915651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.306 [2024-10-14 16:53:35.915664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.306 [2024-10-14 16:53:35.915671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.306 [2024-10-14 16:53:35.915676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.306 [2024-10-14 16:53:35.915690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-10-14 16:53:35.925642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.306 [2024-10-14 16:53:35.925696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.306 [2024-10-14 16:53:35.925709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.306 [2024-10-14 16:53:35.925716] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.306 [2024-10-14 16:53:35.925722] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.306 [2024-10-14 16:53:35.925736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-10-14 16:53:35.935670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.306 [2024-10-14 16:53:35.935738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.306 [2024-10-14 16:53:35.935752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.306 [2024-10-14 16:53:35.935758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.306 [2024-10-14 16:53:35.935764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.306 [2024-10-14 16:53:35.935778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.306 qpair failed and we were unable to recover it. 
00:28:31.566 [2024-10-14 16:53:35.945688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.566 [2024-10-14 16:53:35.945741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.566 [2024-10-14 16:53:35.945754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.566 [2024-10-14 16:53:35.945761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.566 [2024-10-14 16:53:35.945770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.566 [2024-10-14 16:53:35.945785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.566 qpair failed and we were unable to recover it. 00:28:31.566 [2024-10-14 16:53:35.955734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.566 [2024-10-14 16:53:35.955788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.566 [2024-10-14 16:53:35.955801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.566 [2024-10-14 16:53:35.955808] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.566 [2024-10-14 16:53:35.955814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.566 [2024-10-14 16:53:35.955828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.566 qpair failed and we were unable to recover it. 00:28:31.566 [2024-10-14 16:53:35.965796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.566 [2024-10-14 16:53:35.965852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.566 [2024-10-14 16:53:35.965865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.566 [2024-10-14 16:53:35.965872] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.566 [2024-10-14 16:53:35.965877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.566 [2024-10-14 16:53:35.965891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.566 qpair failed and we were unable to recover it. 
00:28:31.566 [2024-10-14 16:53:35.975780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.566 [2024-10-14 16:53:35.975831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.566 [2024-10-14 16:53:35.975844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.566 [2024-10-14 16:53:35.975850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.566 [2024-10-14 16:53:35.975856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.566 [2024-10-14 16:53:35.975871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.566 qpair failed and we were unable to recover it. 00:28:31.566 [2024-10-14 16:53:35.985800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.566 [2024-10-14 16:53:35.985896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.566 [2024-10-14 16:53:35.985908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.566 [2024-10-14 16:53:35.985915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.566 [2024-10-14 16:53:35.985920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.566 [2024-10-14 16:53:35.985934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.566 qpair failed and we were unable to recover it. 00:28:31.566 [2024-10-14 16:53:35.995820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:35.995872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:35.995887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:35.995894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:35.995900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:35.995915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 
00:28:31.567 [2024-10-14 16:53:36.005858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.005913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.005926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.005933] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.005939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.005953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 00:28:31.567 [2024-10-14 16:53:36.015920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.015969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.015982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.015988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.015994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.016008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 00:28:31.567 [2024-10-14 16:53:36.025912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.025966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.025979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.025985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.025991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.026005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 
00:28:31.567 [2024-10-14 16:53:36.035930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.035983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.035997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.036010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.036015] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.036029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 00:28:31.567 [2024-10-14 16:53:36.045999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.046057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.046070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.046076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.046082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.046096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 00:28:31.567 [2024-10-14 16:53:36.055997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.056079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.056093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.056100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.056106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.056121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 
00:28:31.567 [2024-10-14 16:53:36.065991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.066041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.066054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.066061] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.066067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.066081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 00:28:31.567 [2024-10-14 16:53:36.076042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.076096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.076109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.076115] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.076121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.076135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 00:28:31.567 [2024-10-14 16:53:36.086073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.086126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.086139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.086145] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.086151] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.086166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 
00:28:31.567 [2024-10-14 16:53:36.096095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.096147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.096159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.096165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.096171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.096185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 00:28:31.567 [2024-10-14 16:53:36.106151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.106215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.106229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.106235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.106241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.106255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 00:28:31.567 [2024-10-14 16:53:36.116186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.116244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.116257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.116263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.567 [2024-10-14 16:53:36.116269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.567 [2024-10-14 16:53:36.116283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.567 qpair failed and we were unable to recover it. 
00:28:31.567 [2024-10-14 16:53:36.126174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.567 [2024-10-14 16:53:36.126233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.567 [2024-10-14 16:53:36.126246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.567 [2024-10-14 16:53:36.126256] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.568 [2024-10-14 16:53:36.126262] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.568 [2024-10-14 16:53:36.126276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.568 qpair failed and we were unable to recover it. 00:28:31.568 [2024-10-14 16:53:36.136239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.568 [2024-10-14 16:53:36.136321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.568 [2024-10-14 16:53:36.136335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.568 [2024-10-14 16:53:36.136342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.568 [2024-10-14 16:53:36.136348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.568 [2024-10-14 16:53:36.136362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.568 qpair failed and we were unable to recover it. 00:28:31.568 [2024-10-14 16:53:36.146242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.568 [2024-10-14 16:53:36.146295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.568 [2024-10-14 16:53:36.146308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.568 [2024-10-14 16:53:36.146315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.568 [2024-10-14 16:53:36.146321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.568 [2024-10-14 16:53:36.146335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.568 qpair failed and we were unable to recover it. 
00:28:31.568 [2024-10-14 16:53:36.156306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.568 [2024-10-14 16:53:36.156365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.568 [2024-10-14 16:53:36.156377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.568 [2024-10-14 16:53:36.156384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.568 [2024-10-14 16:53:36.156390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.568 [2024-10-14 16:53:36.156403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.568 qpair failed and we were unable to recover it. 00:28:31.568 [2024-10-14 16:53:36.166303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.568 [2024-10-14 16:53:36.166354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.568 [2024-10-14 16:53:36.166367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.568 [2024-10-14 16:53:36.166374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.568 [2024-10-14 16:53:36.166380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.568 [2024-10-14 16:53:36.166394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.568 qpair failed and we were unable to recover it. 00:28:31.568 [2024-10-14 16:53:36.176475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.568 [2024-10-14 16:53:36.176550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.568 [2024-10-14 16:53:36.176563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.568 [2024-10-14 16:53:36.176569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.568 [2024-10-14 16:53:36.176575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.568 [2024-10-14 16:53:36.176589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.568 qpair failed and we were unable to recover it. 
00:28:31.568 [2024-10-14 16:53:36.186422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.568 [2024-10-14 16:53:36.186493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.568 [2024-10-14 16:53:36.186506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.568 [2024-10-14 16:53:36.186513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.568 [2024-10-14 16:53:36.186519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.568 [2024-10-14 16:53:36.186533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.568 qpair failed and we were unable to recover it. 00:28:31.568 [2024-10-14 16:53:36.196361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.568 [2024-10-14 16:53:36.196415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.568 [2024-10-14 16:53:36.196428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.568 [2024-10-14 16:53:36.196434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.568 [2024-10-14 16:53:36.196440] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.568 [2024-10-14 16:53:36.196454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.568 qpair failed and we were unable to recover it. 00:28:31.828 [2024-10-14 16:53:36.206466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.206534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.206547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.206555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.206561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.206575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 
00:28:31.828 [2024-10-14 16:53:36.216370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.216421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.216437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.216444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.216450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.216464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 00:28:31.828 [2024-10-14 16:53:36.226393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.226454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.226467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.226474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.226480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.226493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 00:28:31.828 [2024-10-14 16:53:36.236538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.236595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.236613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.236620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.236625] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.236640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 
00:28:31.828 [2024-10-14 16:53:36.246537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.246633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.246646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.246652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.246658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.246673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 00:28:31.828 [2024-10-14 16:53:36.256548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.256605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.256618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.256624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.256630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.256648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 00:28:31.828 [2024-10-14 16:53:36.266582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.266636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.266650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.266656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.266662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.266676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 
00:28:31.828 [2024-10-14 16:53:36.276533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.276589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.276606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.276613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.276619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.276633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 00:28:31.828 [2024-10-14 16:53:36.286575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.286639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.286652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.286658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.286664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.286679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 00:28:31.828 [2024-10-14 16:53:36.296666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.296722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.296734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.296741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.296747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.296760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 
00:28:31.828 [2024-10-14 16:53:36.306686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.306758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.306774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.306781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.306787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.306801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 00:28:31.828 [2024-10-14 16:53:36.316730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.316797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.316810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.316816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.316822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.316836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 00:28:31.828 [2024-10-14 16:53:36.326753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.326806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.326819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.326825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.326831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.326845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 
00:28:31.828 [2024-10-14 16:53:36.336767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.336848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.828 [2024-10-14 16:53:36.336861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.828 [2024-10-14 16:53:36.336867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.828 [2024-10-14 16:53:36.336873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.828 [2024-10-14 16:53:36.336887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.828 qpair failed and we were unable to recover it. 00:28:31.828 [2024-10-14 16:53:36.346798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.828 [2024-10-14 16:53:36.346848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.346861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.346868] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.346876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.346890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 00:28:31.829 [2024-10-14 16:53:36.356855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.356909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.356922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.356928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.356934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.356948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 
00:28:31.829 [2024-10-14 16:53:36.366794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.366847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.366860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.366867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.366873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.366888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 00:28:31.829 [2024-10-14 16:53:36.376915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.376970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.376983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.376989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.376995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.377009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 00:28:31.829 [2024-10-14 16:53:36.386891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.386946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.386959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.386965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.386971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.386985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 
00:28:31.829 [2024-10-14 16:53:36.396991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.397088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.397100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.397107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.397112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.397126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 00:28:31.829 [2024-10-14 16:53:36.407007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.407109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.407122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.407128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.407134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.407147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 00:28:31.829 [2024-10-14 16:53:36.416999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.417048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.417061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.417067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.417073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.417087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 
00:28:31.829 [2024-10-14 16:53:36.427021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.427074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.427087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.427093] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.427099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.427113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 00:28:31.829 [2024-10-14 16:53:36.437051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.437100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.437113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.437119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.437128] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.437142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 00:28:31.829 [2024-10-14 16:53:36.447075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.447131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.447144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.447151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.447156] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.447170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 
00:28:31.829 [2024-10-14 16:53:36.457112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:31.829 [2024-10-14 16:53:36.457163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:31.829 [2024-10-14 16:53:36.457175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:31.829 [2024-10-14 16:53:36.457181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:31.829 [2024-10-14 16:53:36.457187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:31.829 [2024-10-14 16:53:36.457202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:31.829 qpair failed and we were unable to recover it. 00:28:32.088 [2024-10-14 16:53:36.467194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.088 [2024-10-14 16:53:36.467247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.088 [2024-10-14 16:53:36.467260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.088 [2024-10-14 16:53:36.467267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.088 [2024-10-14 16:53:36.467273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.088 [2024-10-14 16:53:36.467288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.088 qpair failed and we were unable to recover it. 00:28:32.088 [2024-10-14 16:53:36.477225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.088 [2024-10-14 16:53:36.477280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.088 [2024-10-14 16:53:36.477293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.088 [2024-10-14 16:53:36.477299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.088 [2024-10-14 16:53:36.477305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.088 [2024-10-14 16:53:36.477319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.088 qpair failed and we were unable to recover it. 
00:28:32.088 [2024-10-14 16:53:36.487172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.088 [2024-10-14 16:53:36.487238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.088 [2024-10-14 16:53:36.487250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.088 [2024-10-14 16:53:36.487257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.088 [2024-10-14 16:53:36.487263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.088 [2024-10-14 16:53:36.487277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.088 qpair failed and we were unable to recover it. 00:28:32.088 [2024-10-14 16:53:36.497224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.088 [2024-10-14 16:53:36.497280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.088 [2024-10-14 16:53:36.497293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.088 [2024-10-14 16:53:36.497299] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.088 [2024-10-14 16:53:36.497305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.088 [2024-10-14 16:53:36.497319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.088 qpair failed and we were unable to recover it. 00:28:32.088 [2024-10-14 16:53:36.507258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.088 [2024-10-14 16:53:36.507311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.088 [2024-10-14 16:53:36.507324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.088 [2024-10-14 16:53:36.507330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.088 [2024-10-14 16:53:36.507336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.088 [2024-10-14 16:53:36.507350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.088 qpair failed and we were unable to recover it. 
00:28:32.088 [2024-10-14 16:53:36.517281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.088 [2024-10-14 16:53:36.517333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.088 [2024-10-14 16:53:36.517346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.088 [2024-10-14 16:53:36.517352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.088 [2024-10-14 16:53:36.517358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.088 [2024-10-14 16:53:36.517373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.088 qpair failed and we were unable to recover it. 00:28:32.088 [2024-10-14 16:53:36.527305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.088 [2024-10-14 16:53:36.527360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.088 [2024-10-14 16:53:36.527374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.088 [2024-10-14 16:53:36.527384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.088 [2024-10-14 16:53:36.527390] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.088 [2024-10-14 16:53:36.527405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.088 qpair failed and we were unable to recover it. 00:28:32.088 [2024-10-14 16:53:36.537302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.088 [2024-10-14 16:53:36.537355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.088 [2024-10-14 16:53:36.537368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.088 [2024-10-14 16:53:36.537374] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.088 [2024-10-14 16:53:36.537380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.088 [2024-10-14 16:53:36.537395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.088 qpair failed and we were unable to recover it. 
00:28:32.088 [2024-10-14 16:53:36.547368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.088 [2024-10-14 16:53:36.547423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.088 [2024-10-14 16:53:36.547436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.088 [2024-10-14 16:53:36.547442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.088 [2024-10-14 16:53:36.547448] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.547462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 00:28:32.089 [2024-10-14 16:53:36.557401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.557455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.557468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.557475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.557480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.557494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 00:28:32.089 [2024-10-14 16:53:36.567424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.567493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.567506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.567512] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.567518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.567532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 
00:28:32.089 [2024-10-14 16:53:36.577444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.577494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.577507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.577513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.577520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.577533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 00:28:32.089 [2024-10-14 16:53:36.587472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.587523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.587535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.587542] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.587548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.587561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 00:28:32.089 [2024-10-14 16:53:36.597531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.597614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.597627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.597633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.597639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.597653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 
00:28:32.089 [2024-10-14 16:53:36.607531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.607583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.607596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.607606] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.607612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.607626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 00:28:32.089 [2024-10-14 16:53:36.617560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.617616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.617630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.617640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.617646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.617660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 00:28:32.089 [2024-10-14 16:53:36.627638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.627692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.627705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.627712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.627717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.627732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 
00:28:32.089 [2024-10-14 16:53:36.637651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.637710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.637723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.637729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.637735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.637750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 00:28:32.089 [2024-10-14 16:53:36.647656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.647714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.647726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.647733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.647739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.647754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 00:28:32.089 [2024-10-14 16:53:36.657663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.657718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.657731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.657737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.657743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.657757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 
00:28:32.089 [2024-10-14 16:53:36.667674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.667766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.667779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.667785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.667791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.667804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 00:28:32.089 [2024-10-14 16:53:36.677752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.089 [2024-10-14 16:53:36.677803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.089 [2024-10-14 16:53:36.677815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.089 [2024-10-14 16:53:36.677821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.089 [2024-10-14 16:53:36.677827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.089 [2024-10-14 16:53:36.677842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.089 qpair failed and we were unable to recover it. 00:28:32.089 [2024-10-14 16:53:36.687686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.090 [2024-10-14 16:53:36.687740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.090 [2024-10-14 16:53:36.687752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.090 [2024-10-14 16:53:36.687759] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.090 [2024-10-14 16:53:36.687765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.090 [2024-10-14 16:53:36.687779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.090 qpair failed and we were unable to recover it. 
00:28:32.090 [2024-10-14 16:53:36.697779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.090 [2024-10-14 16:53:36.697834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.090 [2024-10-14 16:53:36.697846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.090 [2024-10-14 16:53:36.697852] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.090 [2024-10-14 16:53:36.697858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.090 [2024-10-14 16:53:36.697872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.090 qpair failed and we were unable to recover it. 00:28:32.090 [2024-10-14 16:53:36.707864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.090 [2024-10-14 16:53:36.707923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.090 [2024-10-14 16:53:36.707939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.090 [2024-10-14 16:53:36.707946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.090 [2024-10-14 16:53:36.707951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.090 [2024-10-14 16:53:36.707965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.090 qpair failed and we were unable to recover it. 00:28:32.090 [2024-10-14 16:53:36.717842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.090 [2024-10-14 16:53:36.717929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.090 [2024-10-14 16:53:36.717942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.090 [2024-10-14 16:53:36.717948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.090 [2024-10-14 16:53:36.717954] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.090 [2024-10-14 16:53:36.717968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.090 qpair failed and we were unable to recover it. 
00:28:32.348 [2024-10-14 16:53:36.727928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.348 [2024-10-14 16:53:36.727986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.348 [2024-10-14 16:53:36.727999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.348 [2024-10-14 16:53:36.728006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.348 [2024-10-14 16:53:36.728012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.348 [2024-10-14 16:53:36.728026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.348 qpair failed and we were unable to recover it. 00:28:32.348 [2024-10-14 16:53:36.737897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.348 [2024-10-14 16:53:36.737952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.348 [2024-10-14 16:53:36.737965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.348 [2024-10-14 16:53:36.737971] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.348 [2024-10-14 16:53:36.737977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.348 [2024-10-14 16:53:36.737991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.348 qpair failed and we were unable to recover it. 00:28:32.348 [2024-10-14 16:53:36.747950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.348 [2024-10-14 16:53:36.748001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.748014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.748021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.748027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.748044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 
00:28:32.349 [2024-10-14 16:53:36.757947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.758003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.758015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.758022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.758028] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.758042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 00:28:32.349 [2024-10-14 16:53:36.768009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.768066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.768078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.768084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.768091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.768105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 00:28:32.349 [2024-10-14 16:53:36.777980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.778079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.778092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.778098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.778104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.778118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 
00:28:32.349 [2024-10-14 16:53:36.787977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.788066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.788079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.788085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.788091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.788104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 00:28:32.349 [2024-10-14 16:53:36.798063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.798162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.798181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.798188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.798194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.798208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 00:28:32.349 [2024-10-14 16:53:36.808158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.808257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.808270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.808276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.808282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.808297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 
00:28:32.349 [2024-10-14 16:53:36.818158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.818242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.818254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.818261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.818267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.818281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 00:28:32.349 [2024-10-14 16:53:36.828162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.828213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.828226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.828233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.828238] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.828252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 00:28:32.349 [2024-10-14 16:53:36.838160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.838211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.838224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.838230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.838236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.838253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 
00:28:32.349 [2024-10-14 16:53:36.848208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.848263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.848275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.848282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.848287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.848301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 00:28:32.349 [2024-10-14 16:53:36.858233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.349 [2024-10-14 16:53:36.858319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.349 [2024-10-14 16:53:36.858332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.349 [2024-10-14 16:53:36.858338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.349 [2024-10-14 16:53:36.858343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.349 [2024-10-14 16:53:36.858357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.349 qpair failed and we were unable to recover it. 00:28:32.349 [2024-10-14 16:53:36.868253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.868306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.868319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.868325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.868331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.868345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 
00:28:32.350 [2024-10-14 16:53:36.878283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.878335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.878347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.878354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.878360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.878374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 00:28:32.350 [2024-10-14 16:53:36.888381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.888435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.888451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.888457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.888463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.888477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 00:28:32.350 [2024-10-14 16:53:36.898357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.898410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.898423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.898430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.898435] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.898449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 
00:28:32.350 [2024-10-14 16:53:36.908429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.908490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.908503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.908510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.908516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.908530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 00:28:32.350 [2024-10-14 16:53:36.918394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.918472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.918484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.918491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.918496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.918510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 00:28:32.350 [2024-10-14 16:53:36.928426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.928500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.928513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.928520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.928529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.928543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 
00:28:32.350 [2024-10-14 16:53:36.938455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.938506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.938519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.938526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.938532] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.938546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 00:28:32.350 [2024-10-14 16:53:36.948414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.948470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.948484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.948490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.948496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.948510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 00:28:32.350 [2024-10-14 16:53:36.958501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.958557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.958571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.958578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.958584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.958598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 
00:28:32.350 [2024-10-14 16:53:36.968519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.968579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.968594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.968605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.968612] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.968627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 00:28:32.350 [2024-10-14 16:53:36.978562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.350 [2024-10-14 16:53:36.978619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.350 [2024-10-14 16:53:36.978631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.350 [2024-10-14 16:53:36.978638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.350 [2024-10-14 16:53:36.978643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.350 [2024-10-14 16:53:36.978657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.350 qpair failed and we were unable to recover it. 00:28:32.609 [2024-10-14 16:53:36.988613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.609 [2024-10-14 16:53:36.988685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.609 [2024-10-14 16:53:36.988697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.609 [2024-10-14 16:53:36.988704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.609 [2024-10-14 16:53:36.988710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.609 [2024-10-14 16:53:36.988724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.609 qpair failed and we were unable to recover it. 
00:28:32.609 [2024-10-14 16:53:36.998633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.609 [2024-10-14 16:53:36.998685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.609 [2024-10-14 16:53:36.998699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.609 [2024-10-14 16:53:36.998705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.609 [2024-10-14 16:53:36.998711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.609 [2024-10-14 16:53:36.998726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-10-14 16:53:37.008685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.609 [2024-10-14 16:53:37.008740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.609 [2024-10-14 16:53:37.008753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.609 [2024-10-14 16:53:37.008760] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.609 [2024-10-14 16:53:37.008766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.609 [2024-10-14 16:53:37.008780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-10-14 16:53:37.018679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.609 [2024-10-14 16:53:37.018731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.609 [2024-10-14 16:53:37.018744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.609 [2024-10-14 16:53:37.018755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.609 [2024-10-14 16:53:37.018760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.609 [2024-10-14 16:53:37.018775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.609 qpair failed and we were unable to recover it. 
00:28:32.609 [2024-10-14 16:53:37.028636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.609 [2024-10-14 16:53:37.028691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.609 [2024-10-14 16:53:37.028703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.609 [2024-10-14 16:53:37.028710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.028716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.028730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 00:28:32.610 [2024-10-14 16:53:37.038776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.038828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.038841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.038848] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.038853] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.038867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 00:28:32.610 [2024-10-14 16:53:37.048780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.048834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.048847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.048853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.048859] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.048874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 
00:28:32.610 [2024-10-14 16:53:37.058804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.058884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.058898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.058905] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.058911] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.058926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 00:28:32.610 [2024-10-14 16:53:37.068752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.068809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.068822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.068828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.068834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.068848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 00:28:32.610 [2024-10-14 16:53:37.078853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.078907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.078920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.078926] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.078932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.078946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 
00:28:32.610 [2024-10-14 16:53:37.088819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.088874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.088888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.088894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.088900] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.088915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 00:28:32.610 [2024-10-14 16:53:37.098883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.098938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.098950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.098957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.098963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.098977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 00:28:32.610 [2024-10-14 16:53:37.108920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.108968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.108981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.108991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.108997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.109011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 
00:28:32.610 [2024-10-14 16:53:37.118938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.119019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.119032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.119038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.119044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.119058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 00:28:32.610 [2024-10-14 16:53:37.128989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.129043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.129057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.129063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.129069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.129084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 00:28:32.610 [2024-10-14 16:53:37.139000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.139069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.139082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.139088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.139094] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.139108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 
00:28:32.610 [2024-10-14 16:53:37.149067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.149117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.149130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.149136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.149142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.149156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 00:28:32.610 [2024-10-14 16:53:37.159057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.159125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.159138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.159144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.159150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.610 [2024-10-14 16:53:37.159164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.610 qpair failed and we were unable to recover it. 00:28:32.610 [2024-10-14 16:53:37.169033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-10-14 16:53:37.169090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-10-14 16:53:37.169103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-10-14 16:53:37.169109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-10-14 16:53:37.169115] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.611 [2024-10-14 16:53:37.169128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.611 qpair failed and we were unable to recover it. 
00:28:32.611 [2024-10-14 16:53:37.179178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.611 [2024-10-14 16:53:37.179242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.611 [2024-10-14 16:53:37.179255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.611 [2024-10-14 16:53:37.179261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.611 [2024-10-14 16:53:37.179267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.611 [2024-10-14 16:53:37.179281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-10-14 16:53:37.189097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.611 [2024-10-14 16:53:37.189156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.611 [2024-10-14 16:53:37.189169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.611 [2024-10-14 16:53:37.189175] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.611 [2024-10-14 16:53:37.189181] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.611 [2024-10-14 16:53:37.189196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-10-14 16:53:37.199185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.611 [2024-10-14 16:53:37.199238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.611 [2024-10-14 16:53:37.199255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.611 [2024-10-14 16:53:37.199262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.611 [2024-10-14 16:53:37.199268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.611 [2024-10-14 16:53:37.199282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.611 qpair failed and we were unable to recover it. 
00:28:32.611 [2024-10-14 16:53:37.209203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.611 [2024-10-14 16:53:37.209262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.611 [2024-10-14 16:53:37.209275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.611 [2024-10-14 16:53:37.209282] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.611 [2024-10-14 16:53:37.209288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.611 [2024-10-14 16:53:37.209302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-10-14 16:53:37.219238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.611 [2024-10-14 16:53:37.219331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.611 [2024-10-14 16:53:37.219345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.611 [2024-10-14 16:53:37.219351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.611 [2024-10-14 16:53:37.219357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.611 [2024-10-14 16:53:37.219371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.611 [2024-10-14 16:53:37.229276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.611 [2024-10-14 16:53:37.229334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.611 [2024-10-14 16:53:37.229348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.611 [2024-10-14 16:53:37.229354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.611 [2024-10-14 16:53:37.229360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.611 [2024-10-14 16:53:37.229375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.611 qpair failed and we were unable to recover it. 
00:28:32.611 [2024-10-14 16:53:37.239277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.611 [2024-10-14 16:53:37.239378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.611 [2024-10-14 16:53:37.239392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.611 [2024-10-14 16:53:37.239398] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.611 [2024-10-14 16:53:37.239405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.611 [2024-10-14 16:53:37.239424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.611 qpair failed and we were unable to recover it. 00:28:32.870 [2024-10-14 16:53:37.249265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.870 [2024-10-14 16:53:37.249322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.870 [2024-10-14 16:53:37.249335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.870 [2024-10-14 16:53:37.249342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.870 [2024-10-14 16:53:37.249348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.870 [2024-10-14 16:53:37.249362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.870 qpair failed and we were unable to recover it. 00:28:32.870 [2024-10-14 16:53:37.259397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.870 [2024-10-14 16:53:37.259475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.870 [2024-10-14 16:53:37.259488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.870 [2024-10-14 16:53:37.259495] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.870 [2024-10-14 16:53:37.259500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.870 [2024-10-14 16:53:37.259514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.870 qpair failed and we were unable to recover it. 
00:28:32.870 [2024-10-14 16:53:37.269373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.870 [2024-10-14 16:53:37.269425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.870 [2024-10-14 16:53:37.269438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.870 [2024-10-14 16:53:37.269446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.870 [2024-10-14 16:53:37.269452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.870 [2024-10-14 16:53:37.269467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.870 qpair failed and we were unable to recover it. 00:28:32.870 [2024-10-14 16:53:37.279409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.870 [2024-10-14 16:53:37.279459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.870 [2024-10-14 16:53:37.279471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.870 [2024-10-14 16:53:37.279478] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.870 [2024-10-14 16:53:37.279484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.870 [2024-10-14 16:53:37.279498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.870 qpair failed and we were unable to recover it. 00:28:32.870 [2024-10-14 16:53:37.289444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.870 [2024-10-14 16:53:37.289537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.870 [2024-10-14 16:53:37.289553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.870 [2024-10-14 16:53:37.289560] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.870 [2024-10-14 16:53:37.289565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.870 [2024-10-14 16:53:37.289579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.870 qpair failed and we were unable to recover it. 
00:28:32.870 [2024-10-14 16:53:37.299416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.870 [2024-10-14 16:53:37.299469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.870 [2024-10-14 16:53:37.299483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.870 [2024-10-14 16:53:37.299489] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.870 [2024-10-14 16:53:37.299495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.870 [2024-10-14 16:53:37.299510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.870 qpair failed and we were unable to recover it. 00:28:32.870 [2024-10-14 16:53:37.309504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.870 [2024-10-14 16:53:37.309556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.870 [2024-10-14 16:53:37.309569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.870 [2024-10-14 16:53:37.309576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.870 [2024-10-14 16:53:37.309581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.870 [2024-10-14 16:53:37.309596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.870 qpair failed and we were unable to recover it. 00:28:32.870 [2024-10-14 16:53:37.319576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.870 [2024-10-14 16:53:37.319635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.870 [2024-10-14 16:53:37.319648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.319654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.319660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.319674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 
00:28:32.871 [2024-10-14 16:53:37.329490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.329544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.329557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.329564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.329570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.329587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 00:28:32.871 [2024-10-14 16:53:37.339583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.339639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.339652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.339658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.339664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.339678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 00:28:32.871 [2024-10-14 16:53:37.349631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.349687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.349700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.349706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.349712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.349727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 
00:28:32.871 [2024-10-14 16:53:37.359669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.359726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.359739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.359745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.359751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.359765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 00:28:32.871 [2024-10-14 16:53:37.369672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.369770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.369783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.369789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.369795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.369810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 00:28:32.871 [2024-10-14 16:53:37.379702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.379763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.379780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.379786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.379792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.379806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 
00:28:32.871 [2024-10-14 16:53:37.389727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.389776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.389789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.389796] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.389802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.389817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 00:28:32.871 [2024-10-14 16:53:37.399752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.399805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.399818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.399825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.399830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.399845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 00:28:32.871 [2024-10-14 16:53:37.409835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.409941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.409953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.409960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.409966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.409980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 
00:28:32.871 [2024-10-14 16:53:37.419874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.419935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.419948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.419955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.419964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.419979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 00:28:32.871 [2024-10-14 16:53:37.429839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.429893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.429906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.429913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.429919] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.429933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 00:28:32.871 [2024-10-14 16:53:37.439869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.439922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.439935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.439941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.439947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.439961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 
00:28:32.871 [2024-10-14 16:53:37.449917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.449974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.449987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.449994] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.450000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.871 [2024-10-14 16:53:37.450015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.871 qpair failed and we were unable to recover it. 00:28:32.871 [2024-10-14 16:53:37.459947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.871 [2024-10-14 16:53:37.460005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.871 [2024-10-14 16:53:37.460018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.871 [2024-10-14 16:53:37.460024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.871 [2024-10-14 16:53:37.460030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.872 [2024-10-14 16:53:37.460043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.872 qpair failed and we were unable to recover it. 00:28:32.872 [2024-10-14 16:53:37.469955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.872 [2024-10-14 16:53:37.470006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.872 [2024-10-14 16:53:37.470019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.872 [2024-10-14 16:53:37.470026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.872 [2024-10-14 16:53:37.470032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.872 [2024-10-14 16:53:37.470045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.872 qpair failed and we were unable to recover it. 
00:28:32.872 [2024-10-14 16:53:37.479990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.872 [2024-10-14 16:53:37.480044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.872 [2024-10-14 16:53:37.480056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.872 [2024-10-14 16:53:37.480063] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.872 [2024-10-14 16:53:37.480069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.872 [2024-10-14 16:53:37.480083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.872 qpair failed and we were unable to recover it. 00:28:32.872 [2024-10-14 16:53:37.489952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.872 [2024-10-14 16:53:37.490019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.872 [2024-10-14 16:53:37.490032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.872 [2024-10-14 16:53:37.490039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.872 [2024-10-14 16:53:37.490045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.872 [2024-10-14 16:53:37.490059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.872 qpair failed and we were unable to recover it. 00:28:32.872 [2024-10-14 16:53:37.500045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.872 [2024-10-14 16:53:37.500100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.872 [2024-10-14 16:53:37.500112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.872 [2024-10-14 16:53:37.500119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.872 [2024-10-14 16:53:37.500125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:32.872 [2024-10-14 16:53:37.500139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:32.872 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-10-14 16:53:37.510087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-10-14 16:53:37.510136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-10-14 16:53:37.510149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-10-14 16:53:37.510156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-10-14 16:53:37.510165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.132 [2024-10-14 16:53:37.510179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.132 qpair failed and we were unable to recover it. 00:28:33.132 [2024-10-14 16:53:37.520099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-10-14 16:53:37.520148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-10-14 16:53:37.520160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-10-14 16:53:37.520167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-10-14 16:53:37.520173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.132 [2024-10-14 16:53:37.520187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.132 qpair failed and we were unable to recover it. 00:28:33.132 [2024-10-14 16:53:37.530063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-10-14 16:53:37.530118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-10-14 16:53:37.530131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-10-14 16:53:37.530138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-10-14 16:53:37.530143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.132 [2024-10-14 16:53:37.530157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-10-14 16:53:37.540180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-10-14 16:53:37.540234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-10-14 16:53:37.540248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-10-14 16:53:37.540254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-10-14 16:53:37.540260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.132 [2024-10-14 16:53:37.540274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.132 qpair failed and we were unable to recover it. 00:28:33.132 [2024-10-14 16:53:37.550196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-10-14 16:53:37.550248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-10-14 16:53:37.550261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-10-14 16:53:37.550267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-10-14 16:53:37.550274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.132 [2024-10-14 16:53:37.550288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.132 qpair failed and we were unable to recover it. 00:28:33.132 [2024-10-14 16:53:37.560211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-10-14 16:53:37.560291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-10-14 16:53:37.560304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-10-14 16:53:37.560311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-10-14 16:53:37.560317] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.132 [2024-10-14 16:53:37.560330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-10-14 16:53:37.570308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-10-14 16:53:37.570410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-10-14 16:53:37.570423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-10-14 16:53:37.570430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-10-14 16:53:37.570436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.132 [2024-10-14 16:53:37.570450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.132 qpair failed and we were unable to recover it. 00:28:33.132 [2024-10-14 16:53:37.580267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-10-14 16:53:37.580324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-10-14 16:53:37.580336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-10-14 16:53:37.580342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-10-14 16:53:37.580348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.132 [2024-10-14 16:53:37.580363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.132 qpair failed and we were unable to recover it. 00:28:33.132 [2024-10-14 16:53:37.590297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-10-14 16:53:37.590352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-10-14 16:53:37.590365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-10-14 16:53:37.590372] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-10-14 16:53:37.590378] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.132 [2024-10-14 16:53:37.590392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-10-14 16:53:37.600333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-10-14 16:53:37.600384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.600397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.600407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.600412] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.600427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 00:28:33.133 [2024-10-14 16:53:37.610374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.610473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.610486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.610492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.610498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.610512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 00:28:33.133 [2024-10-14 16:53:37.620417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.620473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.620486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.620493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.620499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.620513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 
00:28:33.133 [2024-10-14 16:53:37.630398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.630448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.630461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.630468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.630474] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.630488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 00:28:33.133 [2024-10-14 16:53:37.640435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.640490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.640503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.640510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.640516] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.640531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 00:28:33.133 [2024-10-14 16:53:37.650469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.650534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.650548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.650554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.650560] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.650574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 
00:28:33.133 [2024-10-14 16:53:37.660497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.660554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.660566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.660572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.660578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.660593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 00:28:33.133 [2024-10-14 16:53:37.670521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.670573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.670587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.670593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.670599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.670617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 00:28:33.133 [2024-10-14 16:53:37.680548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.680647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.680660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.680666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.680672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.680686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 
00:28:33.133 [2024-10-14 16:53:37.690588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.690648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.690661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.690670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.690677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.690691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 00:28:33.133 [2024-10-14 16:53:37.700621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.700673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.700686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.700692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.700698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.700712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 00:28:33.133 [2024-10-14 16:53:37.710691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.710754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-10-14 16:53:37.710767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-10-14 16:53:37.710774] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-10-14 16:53:37.710779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.133 [2024-10-14 16:53:37.710794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.133 qpair failed and we were unable to recover it. 
00:28:33.133 [2024-10-14 16:53:37.720705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-10-14 16:53:37.720758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.134 [2024-10-14 16:53:37.720771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.134 [2024-10-14 16:53:37.720777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.134 [2024-10-14 16:53:37.720783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.134 [2024-10-14 16:53:37.720797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-10-14 16:53:37.730699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.134 [2024-10-14 16:53:37.730771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.134 [2024-10-14 16:53:37.730784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.134 [2024-10-14 16:53:37.730791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.134 [2024-10-14 16:53:37.730797] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.134 [2024-10-14 16:53:37.730811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-10-14 16:53:37.740733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.134 [2024-10-14 16:53:37.740788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.134 [2024-10-14 16:53:37.740801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.134 [2024-10-14 16:53:37.740807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.134 [2024-10-14 16:53:37.740813] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.134 [2024-10-14 16:53:37.740827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.134 qpair failed and we were unable to recover it. 
00:28:33.134 [2024-10-14 16:53:37.750756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.134 [2024-10-14 16:53:37.750813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.134 [2024-10-14 16:53:37.750826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.134 [2024-10-14 16:53:37.750833] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.134 [2024-10-14 16:53:37.750838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.134 [2024-10-14 16:53:37.750852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.134 [2024-10-14 16:53:37.760793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.134 [2024-10-14 16:53:37.760847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.134 [2024-10-14 16:53:37.760861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.134 [2024-10-14 16:53:37.760869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.134 [2024-10-14 16:53:37.760874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.134 [2024-10-14 16:53:37.760888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.134 qpair failed and we were unable to recover it. 00:28:33.402 [2024-10-14 16:53:37.770828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.402 [2024-10-14 16:53:37.770913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.402 [2024-10-14 16:53:37.770926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.402 [2024-10-14 16:53:37.770932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.402 [2024-10-14 16:53:37.770938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.402 [2024-10-14 16:53:37.770953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.402 qpair failed and we were unable to recover it. 
00:28:33.402 [2024-10-14 16:53:37.780825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.402 [2024-10-14 16:53:37.780898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.402 [2024-10-14 16:53:37.780914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.402 [2024-10-14 16:53:37.780920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.402 [2024-10-14 16:53:37.780926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.402 [2024-10-14 16:53:37.780940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.402 qpair failed and we were unable to recover it. 00:28:33.402 [2024-10-14 16:53:37.790863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.402 [2024-10-14 16:53:37.790915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.402 [2024-10-14 16:53:37.790928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.402 [2024-10-14 16:53:37.790934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.402 [2024-10-14 16:53:37.790940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.402 [2024-10-14 16:53:37.790954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.402 qpair failed and we were unable to recover it. 00:28:33.402 [2024-10-14 16:53:37.800890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.402 [2024-10-14 16:53:37.800941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.402 [2024-10-14 16:53:37.800954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.402 [2024-10-14 16:53:37.800960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.402 [2024-10-14 16:53:37.800966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.402 [2024-10-14 16:53:37.800980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.402 qpair failed and we were unable to recover it. 
00:28:33.402 [2024-10-14 16:53:37.810978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.402 [2024-10-14 16:53:37.811072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.402 [2024-10-14 16:53:37.811085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.402 [2024-10-14 16:53:37.811091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.402 [2024-10-14 16:53:37.811097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.402 [2024-10-14 16:53:37.811111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.402 qpair failed and we were unable to recover it. 00:28:33.402 [2024-10-14 16:53:37.820950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.402 [2024-10-14 16:53:37.821008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.402 [2024-10-14 16:53:37.821021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.402 [2024-10-14 16:53:37.821028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.402 [2024-10-14 16:53:37.821033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.402 [2024-10-14 16:53:37.821050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.402 qpair failed and we were unable to recover it. 00:28:33.402 [2024-10-14 16:53:37.830978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.402 [2024-10-14 16:53:37.831026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.402 [2024-10-14 16:53:37.831039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.402 [2024-10-14 16:53:37.831046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.402 [2024-10-14 16:53:37.831051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.402 [2024-10-14 16:53:37.831066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.402 qpair failed and we were unable to recover it. 
00:28:33.402 [2024-10-14 16:53:37.841004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.402 [2024-10-14 16:53:37.841054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.402 [2024-10-14 16:53:37.841068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.402 [2024-10-14 16:53:37.841074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.402 [2024-10-14 16:53:37.841080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.402 [2024-10-14 16:53:37.841095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.402 qpair failed and we were unable to recover it. 00:28:33.402 [2024-10-14 16:53:37.851038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.851109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.851121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.851128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.851133] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.851147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:37.861054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.861105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.861118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.861124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.861130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.861144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 
00:28:33.403 [2024-10-14 16:53:37.871075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.871126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.871142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.871149] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.871155] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.871169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:37.881038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.881089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.881101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.881108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.881113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.881128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:37.891186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.891241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.891254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.891261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.891267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.891281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 
00:28:33.403 [2024-10-14 16:53:37.901154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.901242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.901254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.901261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.901266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.901280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:37.911205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.911253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.911266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.911272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.911281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.911295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:37.921269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.921325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.921337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.921344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.921349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.921364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 
00:28:33.403 [2024-10-14 16:53:37.931268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.931322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.931335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.931341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.931347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.931361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:37.941282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.941338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.941350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.941356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.941362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.941376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:37.951317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.951395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.951408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.951415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.951420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.951434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 
00:28:33.403 [2024-10-14 16:53:37.961339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.961392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.961405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.961412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.961418] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.961431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:37.971420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.971520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.971533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.971540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.971546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.971560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:37.981400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.981453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.981466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.981473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.981479] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.981493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 
00:28:33.403 [2024-10-14 16:53:37.991439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:37.991487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:37.991500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:37.991507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:37.991513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:37.991527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:38.001469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:38.001523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:38.001537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:38.001543] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:38.001552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:38.001567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:38.011493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:38.011545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:38.011558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:38.011565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:38.011571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:38.011585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 
00:28:33.403 [2024-10-14 16:53:38.021520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:38.021608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:38.021621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:38.021628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:38.021633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:38.021647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.403 [2024-10-14 16:53:38.031548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.403 [2024-10-14 16:53:38.031594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.403 [2024-10-14 16:53:38.031611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.403 [2024-10-14 16:53:38.031618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.403 [2024-10-14 16:53:38.031624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.403 [2024-10-14 16:53:38.031639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.403 qpair failed and we were unable to recover it. 00:28:33.661 [2024-10-14 16:53:38.041570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.661 [2024-10-14 16:53:38.041628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.661 [2024-10-14 16:53:38.041642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.661 [2024-10-14 16:53:38.041649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.661 [2024-10-14 16:53:38.041655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.041670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 
00:28:33.662 [2024-10-14 16:53:38.051580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.051643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.051656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.051663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.051669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.051683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 00:28:33.662 [2024-10-14 16:53:38.061666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.061751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.061765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.061771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.061778] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.061793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 00:28:33.662 [2024-10-14 16:53:38.071664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.071737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.071750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.071756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.071762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.071776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 
00:28:33.662 [2024-10-14 16:53:38.081702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.081773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.081786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.081792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.081798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.081812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 00:28:33.662 [2024-10-14 16:53:38.091745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.091800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.091812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.091822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.091827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.091841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 00:28:33.662 [2024-10-14 16:53:38.101765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.101840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.101853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.101859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.101865] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.101879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 
00:28:33.662 [2024-10-14 16:53:38.111826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.111879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.111891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.111897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.111903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.111917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 00:28:33.662 [2024-10-14 16:53:38.121795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.121843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.121856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.121862] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.121868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.121882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 00:28:33.662 [2024-10-14 16:53:38.131873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.131929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.131942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.131949] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.131955] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.131969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 
00:28:33.662 [2024-10-14 16:53:38.141871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.141921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.141934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.141940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.141946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.141961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 00:28:33.662 [2024-10-14 16:53:38.151907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.151955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.151968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.151975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.151980] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.151994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 00:28:33.662 [2024-10-14 16:53:38.161902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.161953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.161966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.161973] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.161979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.161993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 
00:28:33.662 [2024-10-14 16:53:38.172019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.172122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.172135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.172141] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.172147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.172162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.662 qpair failed and we were unable to recover it. 00:28:33.662 [2024-10-14 16:53:38.182017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.662 [2024-10-14 16:53:38.182104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.662 [2024-10-14 16:53:38.182117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.662 [2024-10-14 16:53:38.182126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.662 [2024-10-14 16:53:38.182132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.662 [2024-10-14 16:53:38.182146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 00:28:33.663 [2024-10-14 16:53:38.191962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.192051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.192064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.192070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.192076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.192091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 
00:28:33.663 [2024-10-14 16:53:38.202029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.202086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.202099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.202106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.202112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.202126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 00:28:33.663 [2024-10-14 16:53:38.212122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.212224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.212236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.212243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.212249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.212263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 00:28:33.663 [2024-10-14 16:53:38.222178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.222235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.222248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.222255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.222260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.222274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 
00:28:33.663 [2024-10-14 16:53:38.232100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.232178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.232191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.232198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.232203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.232217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 00:28:33.663 [2024-10-14 16:53:38.242187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.242274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.242287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.242293] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.242299] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.242313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 00:28:33.663 [2024-10-14 16:53:38.252214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.252267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.252280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.252287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.252293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.252307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 
00:28:33.663 [2024-10-14 16:53:38.262227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.262283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.262296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.262303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.262308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.262323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 00:28:33.663 [2024-10-14 16:53:38.272174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.272272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.272288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.272295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.272301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.272315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 00:28:33.663 [2024-10-14 16:53:38.282275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.282328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.282341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.282347] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.282353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.282367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 
00:28:33.663 [2024-10-14 16:53:38.292386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.663 [2024-10-14 16:53:38.292489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.663 [2024-10-14 16:53:38.292501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.663 [2024-10-14 16:53:38.292507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.663 [2024-10-14 16:53:38.292513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.663 [2024-10-14 16:53:38.292527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.663 qpair failed and we were unable to recover it. 00:28:33.922 [2024-10-14 16:53:38.302272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.922 [2024-10-14 16:53:38.302325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.922 [2024-10-14 16:53:38.302338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.922 [2024-10-14 16:53:38.302345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.922 [2024-10-14 16:53:38.302351] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.922 [2024-10-14 16:53:38.302366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.922 qpair failed and we were unable to recover it. 00:28:33.922 [2024-10-14 16:53:38.312377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.922 [2024-10-14 16:53:38.312428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.922 [2024-10-14 16:53:38.312441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.922 [2024-10-14 16:53:38.312448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.922 [2024-10-14 16:53:38.312455] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.922 [2024-10-14 16:53:38.312475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.922 qpair failed and we were unable to recover it. 
00:28:33.922 [2024-10-14 16:53:38.322397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.922 [2024-10-14 16:53:38.322453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.922 [2024-10-14 16:53:38.322466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.322473] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.322478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.322493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 00:28:33.923 [2024-10-14 16:53:38.332442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.332495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.332508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.332515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.332521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.332536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 00:28:33.923 [2024-10-14 16:53:38.342472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.342540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.342553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.342559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.342565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.342579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 
00:28:33.923 [2024-10-14 16:53:38.352491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.352585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.352598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.352608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.352614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.352628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 00:28:33.923 [2024-10-14 16:53:38.362575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.362657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.362672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.362679] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.362684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.362698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 00:28:33.923 [2024-10-14 16:53:38.372583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.372644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.372658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.372664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.372670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.372684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 
00:28:33.923 [2024-10-14 16:53:38.382487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.382547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.382560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.382568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.382574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.382590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 00:28:33.923 [2024-10-14 16:53:38.392594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.392651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.392664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.392670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.392676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.392690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 00:28:33.923 [2024-10-14 16:53:38.402617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.402670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.402683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.402689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.402694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.402712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 
00:28:33.923 [2024-10-14 16:53:38.412658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.412713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.412726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.412733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.412739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.412754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 00:28:33.923 [2024-10-14 16:53:38.422627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.422685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.422698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.422705] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.422711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.422726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 00:28:33.923 [2024-10-14 16:53:38.432650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.432707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.432720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.432727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.432732] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.432746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 
00:28:33.923 [2024-10-14 16:53:38.442716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.442766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.442779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.442786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.442792] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.442806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 00:28:33.923 [2024-10-14 16:53:38.452756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.452845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.452861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.452867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.923 [2024-10-14 16:53:38.452873] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.923 [2024-10-14 16:53:38.452888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.923 qpair failed and we were unable to recover it. 00:28:33.923 [2024-10-14 16:53:38.462727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.923 [2024-10-14 16:53:38.462783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.923 [2024-10-14 16:53:38.462796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.923 [2024-10-14 16:53:38.462802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.924 [2024-10-14 16:53:38.462809] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.924 [2024-10-14 16:53:38.462823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.924 qpair failed and we were unable to recover it. 
00:28:33.924 [2024-10-14 16:53:38.472819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.924 [2024-10-14 16:53:38.472891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.924 [2024-10-14 16:53:38.472904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.924 [2024-10-14 16:53:38.472910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.924 [2024-10-14 16:53:38.472916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.924 [2024-10-14 16:53:38.472930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.924 qpair failed and we were unable to recover it. 00:28:33.924 [2024-10-14 16:53:38.482858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.924 [2024-10-14 16:53:38.482912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.924 [2024-10-14 16:53:38.482924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.924 [2024-10-14 16:53:38.482931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.924 [2024-10-14 16:53:38.482937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.924 [2024-10-14 16:53:38.482951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.924 qpair failed and we were unable to recover it. 00:28:33.924 [2024-10-14 16:53:38.492932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.924 [2024-10-14 16:53:38.492986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.924 [2024-10-14 16:53:38.492999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.924 [2024-10-14 16:53:38.493005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.924 [2024-10-14 16:53:38.493014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.924 [2024-10-14 16:53:38.493029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.924 qpair failed and we were unable to recover it. 
00:28:33.924 [2024-10-14 16:53:38.502955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.924 [2024-10-14 16:53:38.503005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.924 [2024-10-14 16:53:38.503018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.924 [2024-10-14 16:53:38.503024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.924 [2024-10-14 16:53:38.503030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.924 [2024-10-14 16:53:38.503045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.924 qpair failed and we were unable to recover it. 00:28:33.924 [2024-10-14 16:53:38.512864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.924 [2024-10-14 16:53:38.512913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.924 [2024-10-14 16:53:38.512926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.924 [2024-10-14 16:53:38.512932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.924 [2024-10-14 16:53:38.512938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.924 [2024-10-14 16:53:38.512952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.924 qpair failed and we were unable to recover it. 00:28:33.924 [2024-10-14 16:53:38.522966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.924 [2024-10-14 16:53:38.523041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.924 [2024-10-14 16:53:38.523053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.924 [2024-10-14 16:53:38.523060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.924 [2024-10-14 16:53:38.523066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.924 [2024-10-14 16:53:38.523080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.924 qpair failed and we were unable to recover it. 
00:28:33.924 [2024-10-14 16:53:38.532994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.924 [2024-10-14 16:53:38.533091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.924 [2024-10-14 16:53:38.533104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.924 [2024-10-14 16:53:38.533110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.924 [2024-10-14 16:53:38.533116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.924 [2024-10-14 16:53:38.533130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.924 qpair failed and we were unable to recover it. 00:28:33.924 [2024-10-14 16:53:38.543054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.924 [2024-10-14 16:53:38.543137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.924 [2024-10-14 16:53:38.543150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.924 [2024-10-14 16:53:38.543156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.924 [2024-10-14 16:53:38.543163] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.924 [2024-10-14 16:53:38.543177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.924 qpair failed and we were unable to recover it. 00:28:33.924 [2024-10-14 16:53:38.552979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.924 [2024-10-14 16:53:38.553026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.924 [2024-10-14 16:53:38.553039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.924 [2024-10-14 16:53:38.553045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.924 [2024-10-14 16:53:38.553051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:33.924 [2024-10-14 16:53:38.553066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:33.924 qpair failed and we were unable to recover it. 
00:28:34.184 [2024-10-14 16:53:38.563053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.184 [2024-10-14 16:53:38.563108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.184 [2024-10-14 16:53:38.563121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.184 [2024-10-14 16:53:38.563128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.184 [2024-10-14 16:53:38.563134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.184 [2024-10-14 16:53:38.563148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.184 qpair failed and we were unable to recover it. 00:28:34.184 [2024-10-14 16:53:38.573106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.184 [2024-10-14 16:53:38.573161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.184 [2024-10-14 16:53:38.573174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.184 [2024-10-14 16:53:38.573181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.184 [2024-10-14 16:53:38.573187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.184 [2024-10-14 16:53:38.573201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.184 qpair failed and we were unable to recover it. 00:28:34.184 [2024-10-14 16:53:38.583079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.184 [2024-10-14 16:53:38.583129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.184 [2024-10-14 16:53:38.583142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.184 [2024-10-14 16:53:38.583151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.184 [2024-10-14 16:53:38.583157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.184 [2024-10-14 16:53:38.583171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.184 qpair failed and we were unable to recover it. 
00:28:34.184 [2024-10-14 16:53:38.593170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.184 [2024-10-14 16:53:38.593243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.184 [2024-10-14 16:53:38.593256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.184 [2024-10-14 16:53:38.593262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.184 [2024-10-14 16:53:38.593268] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.184 [2024-10-14 16:53:38.593282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.184 qpair failed and we were unable to recover it. 00:28:34.184 [2024-10-14 16:53:38.603129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.184 [2024-10-14 16:53:38.603186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.184 [2024-10-14 16:53:38.603199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.184 [2024-10-14 16:53:38.603206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.184 [2024-10-14 16:53:38.603212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.184 [2024-10-14 16:53:38.603226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.184 qpair failed and we were unable to recover it. 00:28:34.184 [2024-10-14 16:53:38.613226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.613284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.613297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.613304] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.613309] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.613323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 
00:28:34.185 [2024-10-14 16:53:38.623216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.623269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.623282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.623288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.623294] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.623308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 00:28:34.185 [2024-10-14 16:53:38.633309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.633410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.633424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.633430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.633436] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.633450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 00:28:34.185 [2024-10-14 16:53:38.643340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.643389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.643401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.643407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.643413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.643428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 
00:28:34.185 [2024-10-14 16:53:38.653364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.653421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.653433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.653440] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.653446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.653460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 00:28:34.185 [2024-10-14 16:53:38.663315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.663367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.663380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.663386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.663392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.663406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 00:28:34.185 [2024-10-14 16:53:38.673468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.673524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.673538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.673547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.673553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.673567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 
00:28:34.185 [2024-10-14 16:53:38.683368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.683422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.683435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.683441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.683447] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.683461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 00:28:34.185 [2024-10-14 16:53:38.693408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.693461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.693474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.693480] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.693486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.693500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 00:28:34.185 [2024-10-14 16:53:38.703419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.703479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.703492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.703498] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.703504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.703518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 
00:28:34.185 [2024-10-14 16:53:38.713532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.713587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.713604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.713611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.713617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.713631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 00:28:34.185 [2024-10-14 16:53:38.723584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.723642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.723655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.723661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.723667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.723682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 00:28:34.185 [2024-10-14 16:53:38.733590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.733651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.733664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.733671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.733677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.733692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 
00:28:34.185 [2024-10-14 16:53:38.743555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.185 [2024-10-14 16:53:38.743646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.185 [2024-10-14 16:53:38.743659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.185 [2024-10-14 16:53:38.743666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.185 [2024-10-14 16:53:38.743672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.185 [2024-10-14 16:53:38.743686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.185 qpair failed and we were unable to recover it. 00:28:34.185 [2024-10-14 16:53:38.753672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.186 [2024-10-14 16:53:38.753754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.186 [2024-10-14 16:53:38.753766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.186 [2024-10-14 16:53:38.753773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.186 [2024-10-14 16:53:38.753779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.186 [2024-10-14 16:53:38.753793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.186 qpair failed and we were unable to recover it. 00:28:34.186 [2024-10-14 16:53:38.763641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.186 [2024-10-14 16:53:38.763698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.186 [2024-10-14 16:53:38.763714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.186 [2024-10-14 16:53:38.763720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.186 [2024-10-14 16:53:38.763726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.186 [2024-10-14 16:53:38.763740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.186 qpair failed and we were unable to recover it. 
00:28:34.186 [2024-10-14 16:53:38.773707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.186 [2024-10-14 16:53:38.773763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.186 [2024-10-14 16:53:38.773776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.186 [2024-10-14 16:53:38.773782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.186 [2024-10-14 16:53:38.773788] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.186 [2024-10-14 16:53:38.773802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.186 qpair failed and we were unable to recover it. 00:28:34.186 [2024-10-14 16:53:38.783714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.186 [2024-10-14 16:53:38.783775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.186 [2024-10-14 16:53:38.783788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.186 [2024-10-14 16:53:38.783795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.186 [2024-10-14 16:53:38.783801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.186 [2024-10-14 16:53:38.783815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.186 qpair failed and we were unable to recover it. 00:28:34.186 [2024-10-14 16:53:38.793698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.186 [2024-10-14 16:53:38.793778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.186 [2024-10-14 16:53:38.793792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.186 [2024-10-14 16:53:38.793798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.186 [2024-10-14 16:53:38.793804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.186 [2024-10-14 16:53:38.793817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.186 qpair failed and we were unable to recover it. 
00:28:34.186 [2024-10-14 16:53:38.803821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.186 [2024-10-14 16:53:38.803876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.186 [2024-10-14 16:53:38.803889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.186 [2024-10-14 16:53:38.803895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.186 [2024-10-14 16:53:38.803901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.186 [2024-10-14 16:53:38.803918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.186 qpair failed and we were unable to recover it. 00:28:34.186 [2024-10-14 16:53:38.813872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.186 [2024-10-14 16:53:38.813971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.186 [2024-10-14 16:53:38.813983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.186 [2024-10-14 16:53:38.813990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.186 [2024-10-14 16:53:38.813995] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.186 [2024-10-14 16:53:38.814009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.186 qpair failed and we were unable to recover it. 00:28:34.445 [2024-10-14 16:53:38.823765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.445 [2024-10-14 16:53:38.823821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.445 [2024-10-14 16:53:38.823834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.445 [2024-10-14 16:53:38.823841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.445 [2024-10-14 16:53:38.823847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.445 [2024-10-14 16:53:38.823861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.445 qpair failed and we were unable to recover it. 
00:28:34.445 [2024-10-14 16:53:38.833887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.445 [2024-10-14 16:53:38.833937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.445 [2024-10-14 16:53:38.833950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.445 [2024-10-14 16:53:38.833957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.445 [2024-10-14 16:53:38.833963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.445 [2024-10-14 16:53:38.833977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-10-14 16:53:38.843912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.445 [2024-10-14 16:53:38.843965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.445 [2024-10-14 16:53:38.843978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.445 [2024-10-14 16:53:38.843985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.445 [2024-10-14 16:53:38.843991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.445 [2024-10-14 16:53:38.844005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-10-14 16:53:38.853948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.445 [2024-10-14 16:53:38.854006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.445 [2024-10-14 16:53:38.854021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.445 [2024-10-14 16:53:38.854028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.445 [2024-10-14 16:53:38.854034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.445 [2024-10-14 16:53:38.854048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.445 qpair failed and we were unable to recover it. 
00:28:34.445 [2024-10-14 16:53:38.863971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.445 [2024-10-14 16:53:38.864024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.445 [2024-10-14 16:53:38.864037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.445 [2024-10-14 16:53:38.864043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.445 [2024-10-14 16:53:38.864049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.445 [2024-10-14 16:53:38.864063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.445 qpair failed and we were unable to recover it. 00:28:34.445 [2024-10-14 16:53:38.874035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.445 [2024-10-14 16:53:38.874089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.445 [2024-10-14 16:53:38.874101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.874108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.874114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.874128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-10-14 16:53:38.884066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.884132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.884145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.884151] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.884157] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.884171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 
00:28:34.446 [2024-10-14 16:53:38.894065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.894118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.894132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.894138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.894144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.894162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-10-14 16:53:38.904108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.904181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.904195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.904201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.904207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.904222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-10-14 16:53:38.914120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.914173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.914187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.914193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.914199] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.914214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 
00:28:34.446 [2024-10-14 16:53:38.924203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.924259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.924271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.924278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.924284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.924298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-10-14 16:53:38.934178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.934231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.934245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.934251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.934257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.934271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-10-14 16:53:38.944236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.944328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.944344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.944350] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.944356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.944370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 
00:28:34.446 [2024-10-14 16:53:38.954239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.954291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.954304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.954310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.954316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.954330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-10-14 16:53:38.964253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.964308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.964321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.964328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.964333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.964347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-10-14 16:53:38.974272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.974326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.974339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.974346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.974352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.974366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 
00:28:34.446 [2024-10-14 16:53:38.984315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.984366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.984379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.984385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.984394] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.984409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-10-14 16:53:38.994272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:38.994336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:38.994350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:38.994357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:38.994362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:38.994377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 00:28:34.446 [2024-10-14 16:53:39.004328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:39.004382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:39.004396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:39.004403] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:39.004409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.446 [2024-10-14 16:53:39.004424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.446 qpair failed and we were unable to recover it. 
00:28:34.446 [2024-10-14 16:53:39.014402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.446 [2024-10-14 16:53:39.014457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.446 [2024-10-14 16:53:39.014469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.446 [2024-10-14 16:53:39.014476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.446 [2024-10-14 16:53:39.014482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.447 [2024-10-14 16:53:39.014496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-10-14 16:53:39.024473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.447 [2024-10-14 16:53:39.024548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.447 [2024-10-14 16:53:39.024561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.447 [2024-10-14 16:53:39.024568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.447 [2024-10-14 16:53:39.024574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.447 [2024-10-14 16:53:39.024588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-10-14 16:53:39.034464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.447 [2024-10-14 16:53:39.034524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.447 [2024-10-14 16:53:39.034537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.447 [2024-10-14 16:53:39.034544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.447 [2024-10-14 16:53:39.034550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.447 [2024-10-14 16:53:39.034564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.447 qpair failed and we were unable to recover it. 
00:28:34.447 [2024-10-14 16:53:39.044529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.447 [2024-10-14 16:53:39.044585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.447 [2024-10-14 16:53:39.044599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.447 [2024-10-14 16:53:39.044610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.447 [2024-10-14 16:53:39.044616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.447 [2024-10-14 16:53:39.044630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-10-14 16:53:39.054524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.447 [2024-10-14 16:53:39.054577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.447 [2024-10-14 16:53:39.054590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.447 [2024-10-14 16:53:39.054597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.447 [2024-10-14 16:53:39.054606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.447 [2024-10-14 16:53:39.054621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.447 [2024-10-14 16:53:39.064545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.447 [2024-10-14 16:53:39.064605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.447 [2024-10-14 16:53:39.064620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.447 [2024-10-14 16:53:39.064626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.447 [2024-10-14 16:53:39.064632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.447 [2024-10-14 16:53:39.064647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.447 qpair failed and we were unable to recover it. 
00:28:34.447 [2024-10-14 16:53:39.074614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.447 [2024-10-14 16:53:39.074671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.447 [2024-10-14 16:53:39.074684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.447 [2024-10-14 16:53:39.074691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.447 [2024-10-14 16:53:39.074703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.447 [2024-10-14 16:53:39.074719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.447 qpair failed and we were unable to recover it. 00:28:34.707 [2024-10-14 16:53:39.084597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.084656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.084674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.084682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.084688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.084707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-10-14 16:53:39.094638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.094693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.094707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.094714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.094720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.094735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 
00:28:34.707 [2024-10-14 16:53:39.104657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.104737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.104750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.104757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.104762] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.104778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-10-14 16:53:39.114712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.114774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.114788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.114794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.114800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.114815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-10-14 16:53:39.124723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.124778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.124791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.124798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.124804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.124819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 
00:28:34.707 [2024-10-14 16:53:39.134807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.134878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.134891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.134897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.134903] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.134919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-10-14 16:53:39.144805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.144891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.144904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.144910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.144916] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.144929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-10-14 16:53:39.154866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.154922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.154935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.154941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.154948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.154962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 
00:28:34.707 [2024-10-14 16:53:39.164831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.164883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.164895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.164905] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.164911] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.164925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-10-14 16:53:39.174870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.174925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.174938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.174945] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.174951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.174964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 00:28:34.707 [2024-10-14 16:53:39.184892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.184945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.184958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.184964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.184970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.707 [2024-10-14 16:53:39.184984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.707 qpair failed and we were unable to recover it. 
00:28:34.707 [2024-10-14 16:53:39.194914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.707 [2024-10-14 16:53:39.194968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.707 [2024-10-14 16:53:39.194981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.707 [2024-10-14 16:53:39.194987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.707 [2024-10-14 16:53:39.194993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.195007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-10-14 16:53:39.204945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.204997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.205009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.205016] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.205022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.205036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-10-14 16:53:39.215006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.215063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.215076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.215083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.215088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.215102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 
00:28:34.708 [2024-10-14 16:53:39.225004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.225058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.225071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.225077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.225083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.225097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-10-14 16:53:39.235023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.235081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.235094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.235101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.235107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.235121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-10-14 16:53:39.245089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.245146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.245158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.245164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.245170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.245184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 
00:28:34.708 [2024-10-14 16:53:39.255078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.255132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.255145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.255154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.255160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.255174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-10-14 16:53:39.265100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.265151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.265163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.265170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.265176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.265190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-10-14 16:53:39.275123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.275177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.275191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.275197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.275203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.275217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 
00:28:34.708 [2024-10-14 16:53:39.285204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.285253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.285266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.285272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.285278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.285292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-10-14 16:53:39.295205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.295263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.295276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.295283] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.295289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.295302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-10-14 16:53:39.305264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.305321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.305334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.305341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.305347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.305361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 
00:28:34.708 [2024-10-14 16:53:39.315252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.315320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.315333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.315339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.315345] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.315359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-10-14 16:53:39.325287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.325352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.325365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.325371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.325377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.325392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 00:28:34.708 [2024-10-14 16:53:39.335316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.708 [2024-10-14 16:53:39.335376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.708 [2024-10-14 16:53:39.335390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.708 [2024-10-14 16:53:39.335397] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.708 [2024-10-14 16:53:39.335402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.708 [2024-10-14 16:53:39.335417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.708 qpair failed and we were unable to recover it. 
00:28:34.968 [2024-10-14 16:53:39.345426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.968 [2024-10-14 16:53:39.345480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.968 [2024-10-14 16:53:39.345496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.968 [2024-10-14 16:53:39.345502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.968 [2024-10-14 16:53:39.345508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7120000b90 00:28:34.968 [2024-10-14 16:53:39.345523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:34.968 qpair failed and we were unable to recover it. 00:28:34.968 [2024-10-14 16:53:39.355391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.968 [2024-10-14 16:53:39.355482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.968 [2024-10-14 16:53:39.355537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.968 [2024-10-14 16:53:39.355563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.968 [2024-10-14 16:53:39.355585] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f712c000b90 00:28:34.968 [2024-10-14 16:53:39.355651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.968 qpair failed and we were unable to recover it. 00:28:34.968 [2024-10-14 16:53:39.365404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.968 [2024-10-14 16:53:39.365482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.968 [2024-10-14 16:53:39.365511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.968 [2024-10-14 16:53:39.365526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.968 [2024-10-14 16:53:39.365539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f712c000b90 00:28:34.968 [2024-10-14 16:53:39.365568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:34.968 qpair failed and we were unable to recover it. 
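Every attempt in the run above fails the same way: the target side logs "Unknown controller ID 0x1" for the incoming I/O queue, the host's Fabrics CONNECT poll completes with sct 1 / sc 130, and the qpair is dropped without recovery. A minimal way to re-drive that connection by hand is sketched below; it assumes nvme-cli is available on the initiator host and that the portal printed in the log (TCP, 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1) is still being served — neither assumption comes from the autotest scripts themselves.

  # hypothetical manual check from the initiator, not part of the traced test scripts
  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420                                # list subsystems offered at the portal
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # attempt the same Fabrics CONNECT
  sudo nvme list                                                               # check whether a namespace showed up
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1                           # tear the kernel-side connection down again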
00:28:34.968 [2024-10-14 16:53:39.375490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.968 [2024-10-14 16:53:39.375613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.968 [2024-10-14 16:53:39.375676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.968 [2024-10-14 16:53:39.375703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.968 [2024-10-14 16:53:39.375725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c6dc60 00:28:34.968 [2024-10-14 16:53:39.375774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.968 qpair failed and we were unable to recover it. 00:28:34.968 [2024-10-14 16:53:39.385438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.968 [2024-10-14 16:53:39.385516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.968 [2024-10-14 16:53:39.385544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.968 [2024-10-14 16:53:39.385559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.968 [2024-10-14 16:53:39.385572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c6dc60 00:28:34.968 [2024-10-14 16:53:39.385618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:34.968 qpair failed and we were unable to recover it. 00:28:34.968 [2024-10-14 16:53:39.385736] nvme_ctrlr.c:4536:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:34.968 A controller has encountered a failure and is being reset. 00:28:34.968 Controller properly reset. 00:28:34.968 [2024-10-14 16:53:39.406344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dbb90 is same with the state(6) to be set 00:28:34.968 Initializing NVMe Controllers 00:28:34.968 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:34.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:34.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:34.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:34.968 Initialization complete. Launching workers. 
00:28:34.968 Starting thread on core 1 00:28:34.968 Starting thread on core 2 00:28:34.968 Starting thread on core 3 00:28:34.968 Starting thread on core 0 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:34.968 00:28:34.968 real 0m10.842s 00:28:34.968 user 0m19.013s 00:28:34.968 sys 0m4.606s 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.968 ************************************ 00:28:34.968 END TEST nvmf_target_disconnect_tc2 00:28:34.968 ************************************ 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.968 rmmod nvme_tcp 00:28:34.968 rmmod nvme_fabrics 00:28:34.968 rmmod nvme_keyring 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 700111 ']' 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 700111 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 700111 ']' 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 700111 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 700111 00:28:34.968 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:28:34.969 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:28:34.969 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 700111' 00:28:34.969 killing process with pid 700111 00:28:34.969 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 700111 00:28:34.969 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 700111 00:28:35.227 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:35.227 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:35.227 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:35.228 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:35.228 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:28:35.228 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:35.228 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:28:35.228 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.228 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.228 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.228 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.228 16:53:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.762 16:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:37.762 00:28:37.762 real 0m19.590s 00:28:37.762 user 0m46.939s 00:28:37.762 sys 0m9.508s 00:28:37.762 16:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:37.762 16:53:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:37.762 ************************************ 00:28:37.762 END TEST nvmf_target_disconnect 00:28:37.762 ************************************ 00:28:37.762 16:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:37.762 00:28:37.762 real 5m52.352s 00:28:37.762 user 10m33.250s 00:28:37.762 sys 1m58.115s 00:28:37.762 16:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:37.762 16:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.762 ************************************ 00:28:37.762 END TEST nvmf_host 00:28:37.762 ************************************ 00:28:37.762 16:53:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:37.762 16:53:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:37.762 16:53:41 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:37.762 16:53:41 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:37.762 16:53:41 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:37.762 16:53:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:37.762 ************************************ 00:28:37.762 START TEST nvmf_target_core_interrupt_mode 00:28:37.762 ************************************ 00:28:37.762 16:53:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:37.762 * Looking for test storage... 00:28:37.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:37.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.762 --rc genhtml_branch_coverage=1 00:28:37.762 --rc genhtml_function_coverage=1 00:28:37.762 --rc genhtml_legend=1 00:28:37.762 --rc geninfo_all_blocks=1 00:28:37.762 --rc geninfo_unexecuted_blocks=1 00:28:37.762 00:28:37.762 ' 00:28:37.762 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:37.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.762 --rc genhtml_branch_coverage=1 00:28:37.763 --rc genhtml_function_coverage=1 00:28:37.763 --rc genhtml_legend=1 00:28:37.763 --rc geninfo_all_blocks=1 00:28:37.763 --rc geninfo_unexecuted_blocks=1 00:28:37.763 00:28:37.763 ' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:37.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.763 --rc genhtml_branch_coverage=1 00:28:37.763 --rc genhtml_function_coverage=1 00:28:37.763 --rc genhtml_legend=1 00:28:37.763 --rc geninfo_all_blocks=1 00:28:37.763 --rc geninfo_unexecuted_blocks=1 00:28:37.763 00:28:37.763 ' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:37.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.763 --rc genhtml_branch_coverage=1 00:28:37.763 --rc genhtml_function_coverage=1 00:28:37.763 --rc genhtml_legend=1 00:28:37.763 --rc geninfo_all_blocks=1 00:28:37.763 --rc geninfo_unexecuted_blocks=1 00:28:37.763 00:28:37.763 ' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:37.763 ************************************ 00:28:37.763 START TEST nvmf_abort 00:28:37.763 ************************************ 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:37.763 * Looking for test storage... 00:28:37.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.763 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:37.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.763 --rc genhtml_branch_coverage=1 00:28:37.764 --rc genhtml_function_coverage=1 00:28:37.764 --rc genhtml_legend=1 00:28:37.764 --rc geninfo_all_blocks=1 00:28:37.764 --rc geninfo_unexecuted_blocks=1 00:28:37.764 00:28:37.764 ' 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:37.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.764 --rc genhtml_branch_coverage=1 00:28:37.764 --rc genhtml_function_coverage=1 00:28:37.764 --rc genhtml_legend=1 00:28:37.764 --rc geninfo_all_blocks=1 00:28:37.764 --rc geninfo_unexecuted_blocks=1 00:28:37.764 00:28:37.764 ' 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:37.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.764 --rc genhtml_branch_coverage=1 00:28:37.764 --rc genhtml_function_coverage=1 00:28:37.764 --rc genhtml_legend=1 00:28:37.764 --rc geninfo_all_blocks=1 00:28:37.764 --rc geninfo_unexecuted_blocks=1 00:28:37.764 00:28:37.764 ' 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:37.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.764 --rc genhtml_branch_coverage=1 00:28:37.764 --rc genhtml_function_coverage=1 00:28:37.764 --rc genhtml_legend=1 00:28:37.764 --rc geninfo_all_blocks=1 00:28:37.764 --rc geninfo_unexecuted_blocks=1 00:28:37.764 00:28:37.764 ' 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.764 16:53:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.764 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.023 16:53:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:44.590 16:53:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.590 16:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:44.590 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
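A sketch, not part of the captured run, of how the E810 discovery logged above can be reproduced by hand: gather_supported_nvmf_pci_devs matches the Intel vendor ID 0x8086 against the E810 device IDs 0x1592/0x159b, which is why the two 0000:86:00.x ports are reported as Found. Assuming lspci is available on the node:

    # hypothetical manual equivalent of the E810 scan above
    intel=8086
    for dev in 1592 159b; do
        lspci -D -d "${intel}:${dev}"              # lists matching ports, e.g. 0000:86:00.0
    done
    ls /sys/bus/pci/devices/0000:86:00.0/net       # net device behind the port (cvl_0_0 in this run)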
00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:44.590 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:44.590 Found net devices under 0000:86:00.0: cvl_0_0 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:44.590 Found net devices under 0000:86:00.1: cvl_0_1 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:44.590 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:44.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:44.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:28:44.591 00:28:44.591 --- 10.0.0.2 ping statistics --- 00:28:44.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.591 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:44.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:44.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:28:44.591 00:28:44.591 --- 10.0.0.1 ping statistics --- 00:28:44.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:44.591 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=704688 
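A minimal sketch of the namespace wiring that nvmf_tcp_init performs above, with the interface names (cvl_0_0/cvl_0_1) and addresses taken from this run; the target side is isolated in the cvl_0_0_ns_spdk namespace at 10.0.0.2 while the initiator stays in the root namespace at 10.0.0.1:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open NVMe/TCP port 4420 towards the initiator interface; tagged so cleanup can find the rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                # initiator -> target, as verified in the log
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator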
00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 704688 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 704688 ']' 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.591 [2024-10-14 16:53:48.350439] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:44.591 [2024-10-14 16:53:48.351341] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:28:44.591 [2024-10-14 16:53:48.351372] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.591 [2024-10-14 16:53:48.421541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.591 [2024-10-14 16:53:48.462954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.591 [2024-10-14 16:53:48.462986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.591 [2024-10-14 16:53:48.462993] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:44.591 [2024-10-14 16:53:48.462999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:44.591 [2024-10-14 16:53:48.463004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.591 [2024-10-14 16:53:48.464400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.591 [2024-10-14 16:53:48.464506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.591 [2024-10-14 16:53:48.464506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.591 [2024-10-14 16:53:48.530736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:44.591 [2024-10-14 16:53:48.531717] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:44.591 [2024-10-14 16:53:48.532566] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:44.591 [2024-10-14 16:53:48.532569] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.591 [2024-10-14 16:53:48.597303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.591 Malloc0 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.591 Delay0 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
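Behind the xtrace above, nvmfappstart launches the interrupt-mode target inside the namespace and abort.sh then configures it over JSON-RPC. A rough manual equivalent from the SPDK repo root, with every option value copied from the log; the rpc_get_methods poll is only an assumed cheap way to wait for the RPC socket, which the log shows defaulting to /var/tmp/spdk.sock:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # -m 0xE pins reactors to cores 1-3, matching the three reactor start-up notices above
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0     # 64 MB malloc bdev, 4096-byte blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000            # ~1 s of injected latency per I/O
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0

The delay bdev keeps I/Os outstanding long enough that the abort requests issued later have something to cancel; the TCP listener on 10.0.0.2:4420 is added in the next lines of the log.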
00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.591 [2024-10-14 16:53:48.685307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.591 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:44.592 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.592 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.592 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.592 16:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:44.592 [2024-10-14 16:53:48.759177] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:46.494 Initializing NVMe Controllers 00:28:46.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:46.494 controller IO queue size 128 less than required 00:28:46.494 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:46.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:46.494 Initialization complete. Launching workers. 
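The initiator side is the bundled abort example, invoked exactly as shown above; a queue depth of 128 against the deliberately slow Delay0 namespace is what produces the "controller IO queue size 128 less than required" notice and the large abort counts reported next:

    # hypothetical re-run of the initiator from the root namespace
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128    # core mask 0x1, warning log level, queue depth 128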
00:28:46.494 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 38142 00:28:46.494 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38203, failed to submit 66 00:28:46.494 success 38142, unsuccessful 61, failed 0 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.494 rmmod nvme_tcp 00:28:46.494 rmmod nvme_fabrics 00:28:46.494 rmmod nvme_keyring 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 704688 ']' 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 704688 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 704688 ']' 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 704688 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 704688 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 704688' 00:28:46.494 killing process with pid 704688 00:28:46.494 
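The teardown around these lines (nvmftestfini plus killprocess) can be mirrored by hand roughly as below; deleting the namespace is presumably what _remove_spdk_ns does behind the xtrace redirection, while the remaining commands are the ones visible in the log:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"
    # drop the SPDK_NVMF-tagged firewall rule and the init-time namespace
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1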
16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 704688 00:28:46.494 16:53:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 704688 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.494 16:53:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.028 00:28:49.028 real 0m10.953s 00:28:49.028 user 0m9.941s 00:28:49.028 sys 0m5.603s 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.028 ************************************ 00:28:49.028 END TEST nvmf_abort 00:28:49.028 ************************************ 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:49.028 ************************************ 00:28:49.028 START TEST nvmf_ns_hotplug_stress 00:28:49.028 ************************************ 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:49.028 * Looking for test storage... 
00:28:49.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:49.028 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:49.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.029 --rc genhtml_branch_coverage=1 00:28:49.029 --rc genhtml_function_coverage=1 00:28:49.029 --rc genhtml_legend=1 00:28:49.029 --rc geninfo_all_blocks=1 00:28:49.029 --rc geninfo_unexecuted_blocks=1 00:28:49.029 00:28:49.029 ' 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:49.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.029 --rc genhtml_branch_coverage=1 00:28:49.029 --rc genhtml_function_coverage=1 00:28:49.029 --rc genhtml_legend=1 00:28:49.029 --rc geninfo_all_blocks=1 00:28:49.029 --rc geninfo_unexecuted_blocks=1 00:28:49.029 00:28:49.029 ' 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:49.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.029 --rc genhtml_branch_coverage=1 00:28:49.029 --rc genhtml_function_coverage=1 00:28:49.029 --rc genhtml_legend=1 00:28:49.029 --rc geninfo_all_blocks=1 00:28:49.029 --rc geninfo_unexecuted_blocks=1 00:28:49.029 00:28:49.029 ' 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:49.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.029 --rc genhtml_branch_coverage=1 00:28:49.029 --rc genhtml_function_coverage=1 
00:28:49.029 --rc genhtml_legend=1 00:28:49.029 --rc geninfo_all_blocks=1 00:28:49.029 --rc geninfo_unexecuted_blocks=1 00:28:49.029 00:28:49.029 ' 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.029 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:49.030 16:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.597 16:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.597 16:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:55.597 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:55.597 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:55.597 
16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:55.597 Found net devices under 0000:86:00.0: cvl_0_0 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:55.597 Found net devices under 0000:86:00.1: cvl_0_1 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.597 16:53:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.597 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:28:55.598 00:28:55.598 --- 10.0.0.2 ping statistics --- 00:28:55.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.598 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:55.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:28:55.598 00:28:55.598 --- 10.0.0.1 ping statistics --- 00:28:55.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.598 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=708675 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 708675 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 708675 ']' 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
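Condensed, the nvmf_tcp_init sequence the trace above walks through moves one of the two ice ports into a private network namespace, addresses both ends of the resulting point-to-point path, opens the NVMe/TCP port, and ping-tests the link. The interface names cvl_0_0/cvl_0_1 are simply what this host enumerated, and the iptables line omits the bookkeeping comment the ipts helper tacks on:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the namespace

  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP (port 4420) in on the initiator-facing interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Reachability check in both directions before the target is started.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1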
00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:55.598 [2024-10-14 16:53:59.387813] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:55.598 [2024-10-14 16:53:59.388714] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:28:55.598 [2024-10-14 16:53:59.388745] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.598 [2024-10-14 16:53:59.461066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:55.598 [2024-10-14 16:53:59.502234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.598 [2024-10-14 16:53:59.502268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.598 [2024-10-14 16:53:59.502275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.598 [2024-10-14 16:53:59.502281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.598 [2024-10-14 16:53:59.502286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.598 [2024-10-14 16:53:59.503718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.598 [2024-10-14 16:53:59.503823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.598 [2024-10-14 16:53:59.503824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.598 [2024-10-14 16:53:59.569001] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:55.598 [2024-10-14 16:53:59.569964] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:55.598 [2024-10-14 16:53:59.570280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:55.598 [2024-10-14 16:53:59.570435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
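The DPDK, reactor, and thread notices above belong to the target process itself: nvmfappstart runs build/bin/nvmf_tgt inside that namespace with --interrupt-mode and core mask 0xE, which is why three reactors come up and every nvmf poll-group thread reports switching to interrupt mode before any configuration RPC is sent. Launched by hand, that step looks roughly like the sketch below; the polling loop is only an approximation of what waitforlisten does, not its actual implementation.

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Start the target in the namespace created above: event mask 0xFFFF, cores 1-3, interrupt mode.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!

  # Wait for the RPC socket (/var/tmp/spdk.sock by default) to start answering.
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done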
00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:55.598 [2024-10-14 16:53:59.800467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.598 16:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:55.598 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.598 [2024-10-14 16:54:00.196940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.598 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:55.857 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:56.115 Malloc0 00:28:56.115 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:56.374 Delay0 00:28:56.374 16:54:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:56.374 16:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:56.632 NULL1 00:28:56.632 16:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
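Stripped of the xtrace noise, the configuration built just above is a short run of rpc.py calls: a TCP transport, one subsystem with a data listener and a discovery listener, a delay bdev stacked on a malloc bdev, and a resizable null bdev, with both bdevs attached to cnode1 as namespaces. Roughly:

  rpc=./scripts/rpc.py   # talks to the target over /var/tmp/spdk.sock

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  $rpc bdev_malloc_create 32 512 -b Malloc0            # 32 MiB RAM-backed bdev, 512 B blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000     # large artificial latencies (microseconds)
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  $rpc bdev_null_create NULL1 1000 512                 # 1000 MiB null bdev, 512 B blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1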
00:28:56.890 16:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=709031 00:28:56.890 16:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:56.890 16:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:28:56.890 16:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.148 16:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:57.406 16:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:57.406 16:54:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:57.406 true 00:28:57.406 16:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:28:57.406 16:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.664 16:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:57.922 16:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:57.922 16:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:58.180 true 00:28:58.180 16:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:28:58.180 16:54:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.555 Read completed with error (sct=0, sc=11) 00:28:59.555 16:54:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.555 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.555 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:28:59.555 16:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:59.556 16:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:59.814 true 00:28:59.814 16:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:28:59.814 16:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.838 16:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.838 16:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:00.838 16:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:00.838 true 00:29:00.838 16:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:00.838 16:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.097 16:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.355 16:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:01.355 16:54:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:01.614 true 00:29:01.614 16:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:01.614 16:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.549 16:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:02.808 16:54:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:02.808 16:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:03.065 true 00:29:03.065 16:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:03.065 16:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.000 16:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.000 16:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:04.000 16:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:04.258 true 00:29:04.258 16:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:04.258 16:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.516 16:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.775 16:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:04.775 16:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:04.775 true 00:29:04.775 16:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:04.775 16:54:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:06.152 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:06.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:06.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:06.152 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:29:06.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:06.152 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:06.152 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:06.411 true 00:29:06.411 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:06.411 16:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.347 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.347 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:07.347 16:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:07.605 true 00:29:07.605 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:07.605 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.864 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.123 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:08.123 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:08.123 true 00:29:08.382 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:08.382 16:54:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:09.318 16:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:09.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:09.318 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:29:09.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:09.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:09.576 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:09.576 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:09.834 true 00:29:09.834 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:09.834 16:54:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.769 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.769 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:10.769 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:11.027 true 00:29:11.027 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:11.027 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.286 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.286 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:11.286 16:54:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:11.545 true 00:29:11.545 16:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:11.545 16:54:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.482 16:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.741 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.741 16:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:12.741 16:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:12.999 true 00:29:12.999 16:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:12.999 16:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.934 16:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.934 16:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:13.934 16:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:14.193 true 00:29:14.193 16:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:14.193 16:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.451 16:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.708 16:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:14.708 16:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:14.708 true 00:29:14.967 16:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:14.967 16:54:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.903 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:15.903 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.161 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:16.161 16:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:16.161 true 00:29:16.161 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:16.161 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.420 16:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.678 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:16.678 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:16.937 true 00:29:16.937 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:16.937 16:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.873 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.873 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:18.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:18.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:18.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:18.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:18.130 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:18.130 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:18.389 true 00:29:18.389 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:18.389 16:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.326 16:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.326 16:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:19.326 16:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:19.585 true 00:29:19.585 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:19.585 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.843 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.102 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:20.102 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:20.102 true 00:29:20.102 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:20.102 16:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:21.478 16:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:21.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:21.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:21.478 16:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:21.478 16:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:21.735 true 00:29:21.735 16:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:21.735 16:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.993 16:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.993 16:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:21.993 16:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:22.251 true 00:29:22.251 16:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:22.251 16:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.626 16:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.626 16:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:23.626 16:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:23.884 true 00:29:23.884 16:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:23.884 16:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:24.820 16:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.079 16:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:25.079 16:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:25.079 true 00:29:25.079 16:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:25.079 16:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.337 16:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:25.597 16:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1027 00:29:25.597 16:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:25.597 true 00:29:25.856 16:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:25.856 16:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.793 16:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.052 16:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:27.052 16:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:27.312 true 00:29:27.312 16:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:27.312 16:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.312 Initializing NVMe Controllers 00:29:27.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:27.312 Controller IO queue size 128, less than required. 00:29:27.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:27.312 Controller IO queue size 128, less than required. 00:29:27.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:27.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:27.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:27.312 Initialization complete. Launching workers. 
00:29:27.312 ======================================================== 00:29:27.312 Latency(us) 00:29:27.312 Device Information : IOPS MiB/s Average min max 00:29:27.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1920.32 0.94 43592.98 2542.41 1024245.52 00:29:27.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17379.88 8.49 7342.64 1107.42 304484.04 00:29:27.312 ======================================================== 00:29:27.312 Total : 19300.20 9.42 10949.45 1107.42 1024245.52 00:29:27.312 00:29:27.571 16:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.571 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:27.571 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:27.830 true 00:29:27.830 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 709031 00:29:27.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (709031) - No such process 00:29:27.830 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 709031 00:29:27.830 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.090 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:28.349 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:28.349 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:28.349 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:28.349 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:28.349 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:28.349 null0 00:29:28.349 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:28.349 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:28.349 16:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:28.608 null1 00:29:28.608 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:28.608 
16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:28.608 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:28.867 null2 00:29:28.867 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:28.867 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:28.867 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:28.867 null3 00:29:29.127 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:29.127 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:29.127 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:29.127 null4 00:29:29.127 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:29.127 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:29.127 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:29.386 null5 00:29:29.386 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:29.386 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:29.386 16:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:29.386 null6 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:29.646 null7 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:29.646 16:54:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
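The trace before the I/O summary above is the resize phase of the test, repeating script lines 44-50 of ns_hotplug_stress.sh: while the background stress process (pid 709031 in this run) is still alive, namespace 1 is removed and re-added and the NULL1 bdev is grown by one step per pass (1018, 1019, 1020, ... in the trace). A minimal sketch of that loop, inferred from the @44-@50 lines rather than quoted from the script; rpc_py is shorthand for the full scripts/rpc.py path shown in the log, and stress_pid is a hypothetical name for the pid the trace checks with kill -0:

    # sketch inferred from the @44-@50 trace lines; not the script verbatim
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 "$stress_pid"; do                    # stress_pid is assumed; 709031 in this run
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))                   # 1018, 1019, 1020, ... in the trace
        $rpc_py bdev_null_resize NULL1 $null_size
    done

When the stress process exits, the kill -0 check fails (the "No such process" message in the log), the script waits on the pid at line 53, and the run moves on to the multi-threaded add/remove phase traced below.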
00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:29.646 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
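Each of the workers spawned here runs the add_remove helper whose expansion fills the rest of this trace (script lines 14-18): every worker adds and removes its own namespace ten times against nqn.2016-06.io.spdk:cnode1. A sketch reconstructed from the @14/@16/@17/@18 lines, using the same rpc_py shorthand as above and not quoted from the script itself:

    add_remove() {                                     # invoked as "add_remove 1 null0" ... "add_remove 8 null7"
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
        done
    }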
00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 714808 714809 714811 714813 714815 714817 714819 714821 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:29.647 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:29.906 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.906 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:29.906 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:29.906 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:29.906 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:29.906 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:29.906 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:29.906 16:54:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.165 16:54:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.165 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:30.424 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.424 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:30.424 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:30.424 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:30.424 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:30.425 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:30.425 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:30.425 16:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.425 16:54:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.425 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:30.684 16:54:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:30.684 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:30.943 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.943 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.943 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.943 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.943 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:30.943 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:30.943 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.943 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:30.944 16:54:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:30.944 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:31.203 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:31.203 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:31.203 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:31.203 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:29:31.203 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.203 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:31.203 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:31.203 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.462 16:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:31.462 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:31.462 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:31.462 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:31.462 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.462 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:31.462 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:31.462 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.719 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.720 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:31.720 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.720 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:31.720 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.720 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.720 16:54:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:31.720 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:31.720 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:31.720 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:31.977 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:31.977 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:31.977 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:31.977 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:31.977 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:31.977 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:31.977 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:31.977 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:32.235 16:54:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.235 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:32.236 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:32.236 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.236 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.236 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:32.236 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.236 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.236 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:32.236 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.236 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.236 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:32.494 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:32.494 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.495 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:32.495 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:32.495 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:32.495 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:32.495 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:32.495 16:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.495 16:54:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.495 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:32.753 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.753 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.753 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:32.753 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:32.753 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:32.753 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:32.754 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:32.754 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:32.754 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:32.754 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.754 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:32.754 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:32.754 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:32.754 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.013 16:54:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.013 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.272 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:33.272 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:33.272 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.272 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:33.272 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:33.272 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.272 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:33.272 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:33.537 16:54:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.537 16:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.537 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:33.537 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:33.537 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:33.537 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.537 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:33.537 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:33.537 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:33.537 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
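The ns_hotplug_stress.sh@16/@17/@18 markers that dominate the trace above all belong to one stress loop: namespaces 1-8, each backed by a null bdev (null0-null7), are repeatedly attached to and detached from nqn.2016-06.io.spdk:cnode1 for ten rounds. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK script (the interleaved ordering of the add/remove entries suggests the RPCs are fired in parallel, so the & / wait below is an inference, not a quote):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  i=0                                               # sh@16 in the trace
  while (( i < 10 )); do
      for n in {1..8}; do                           # sh@17: attach null(n-1) as nsid n
          "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
      done
      wait
      for n in {1..8}; do                           # sh@18: detach the same namespaces
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &
      done
      wait
      (( ++i ))
  done

Each attach/detach forces the target to publish an async namespace-change event to connected hosts, which is what this stress case is exercising under interrupt mode.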
00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:33.797 rmmod nvme_tcp 00:29:33.797 rmmod nvme_fabrics 00:29:33.797 rmmod nvme_keyring 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:33.797 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 708675 ']' 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 708675 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 708675 ']' 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 708675 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 708675 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 708675' 00:29:34.056 killing process with pid 708675 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 708675 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 708675 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.056 16:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:36.594 00:29:36.594 real 0m47.508s 00:29:36.594 user 2m57.853s 00:29:36.594 sys 0m20.748s 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:36.594 ************************************ 00:29:36.594 END TEST nvmf_ns_hotplug_stress 00:29:36.594 ************************************ 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:36.594 16:54:40 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:36.594 ************************************ 00:29:36.594 START TEST nvmf_delete_subsystem 00:29:36.594 ************************************ 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:36.594 * Looking for test storage... 00:29:36.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:36.594 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:36.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.595 --rc genhtml_branch_coverage=1 00:29:36.595 --rc genhtml_function_coverage=1 00:29:36.595 --rc genhtml_legend=1 00:29:36.595 --rc geninfo_all_blocks=1 00:29:36.595 --rc geninfo_unexecuted_blocks=1 00:29:36.595 00:29:36.595 ' 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:36.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.595 --rc genhtml_branch_coverage=1 00:29:36.595 --rc genhtml_function_coverage=1 00:29:36.595 --rc genhtml_legend=1 00:29:36.595 --rc geninfo_all_blocks=1 00:29:36.595 --rc geninfo_unexecuted_blocks=1 00:29:36.595 00:29:36.595 ' 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:36.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.595 --rc genhtml_branch_coverage=1 00:29:36.595 --rc genhtml_function_coverage=1 00:29:36.595 --rc genhtml_legend=1 00:29:36.595 --rc geninfo_all_blocks=1 00:29:36.595 --rc geninfo_unexecuted_blocks=1 00:29:36.595 00:29:36.595 ' 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:36.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:36.595 --rc genhtml_branch_coverage=1 00:29:36.595 --rc genhtml_function_coverage=1 00:29:36.595 --rc 
genhtml_legend=1 00:29:36.595 --rc geninfo_all_blocks=1 00:29:36.595 --rc geninfo_unexecuted_blocks=1 00:29:36.595 00:29:36.595 ' 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.595 16:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.595 16:54:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:36.595 16:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.166 16:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:43.166 16:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:43.166 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.166 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:43.167 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.167 16:54:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:43.167 Found net devices under 0000:86:00.0: cvl_0_0 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:43.167 Found net devices under 0000:86:00.1: cvl_0_1 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:29:43.167 00:29:43.167 --- 10.0.0.2 ping statistics --- 00:29:43.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.167 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:43.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:29:43.167 00:29:43.167 --- 10.0.0.1 ping statistics --- 00:29:43.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.167 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=719173 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 719173 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 719173 ']' 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
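The network bring-up traced above (nvmf/common.sh@250-291) reduces to the following sequence. This is reconstructed from the xtrace lines, and the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are simply the values this particular run detected, not fixed constants:

    # target-side port goes into its own network namespace, initiator port stays in the root namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The script additionally tags the iptables rule with an SPDK_NVMF comment so it can be stripped again during cleanup (the iptables-save | grep -v SPDK_NVMF | iptables-restore step near the end of the test). The nvmf_tgt application is then launched inside that namespace, so the target listens on 10.0.0.2:4420 while the initiator-side tools run from the root namespace.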
00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:43.167 16:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.167 [2024-10-14 16:54:46.967738] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:43.167 [2024-10-14 16:54:46.968630] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:29:43.167 [2024-10-14 16:54:46.968661] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.167 [2024-10-14 16:54:47.040848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:43.167 [2024-10-14 16:54:47.081917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.167 [2024-10-14 16:54:47.081953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.167 [2024-10-14 16:54:47.081960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.168 [2024-10-14 16:54:47.081965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.168 [2024-10-14 16:54:47.081971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.168 [2024-10-14 16:54:47.083146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.168 [2024-10-14 16:54:47.083147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.168 [2024-10-14 16:54:47.149781] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:43.168 [2024-10-14 16:54:47.150290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:43.168 [2024-10-14 16:54:47.150560] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
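In the startup notices above, -m 0x3 gives the target two reactors (cores 0 and 1) and --interrupt-mode switches the app thread and both nvmf poll groups from busy polling to event-driven operation, which is what the thread.c messages record. The waitforlisten step that follows blocks until the target answers on /var/tmp/spdk.sock; as a rough illustration only (this is not the actual autotest_common.sh implementation), that wait amounts to polling the RPC socket with a real method such as rpc_get_methods:

    # hypothetical stand-in for waitforlisten: poll until nvmf_tgt serves its RPC socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done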
00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.168 [2024-10-14 16:54:47.215991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.168 [2024-10-14 16:54:47.244268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.168 NULL1 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.168 16:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.168 Delay0 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=719200 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:43.168 16:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:43.168 [2024-10-14 16:54:47.345823] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
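Condensed from the rpc_cmd calls traced above, the target configuration for this test case is the following (rpc.py is shown explicitly here; in the script, rpc_cmd wraps it and points at the freshly started nvmf_tgt's RPC socket):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The large artificial latencies configured on Delay0 (the -r/-t/-w/-n values) keep I/O outstanding long enough that the nvmf_delete_subsystem call issued two seconds later lands while spdk_nvme_perf (-c 0xC -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 against 10.0.0.2:4420) still has a full queue in flight, which is exactly what the error completions below exercise.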
00:29:45.072 16:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:45.072 16:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:45.072 16:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 starting I/O failed: -6 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 starting I/O failed: -6 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 starting I/O failed: -6 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 starting I/O failed: -6 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 starting I/O failed: -6 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 starting I/O failed: -6 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 starting I/O failed: -6 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 starting I/O failed: -6 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 starting I/O failed: -6 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 starting I/O failed: -6 00:29:45.072 [2024-10-14 16:54:49.501905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7570 is same with the state(6) to be set 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 
00:29:45.072 Read completed with error (sct=0, sc=8) 00:29:45.072 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 [2024-10-14 16:54:49.503264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7930 is same with the state(6) to be set 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error 
(sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 starting I/O failed: -6 00:29:45.073 [2024-10-14 16:54:49.507028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f706c000c00 is same with the state(6) to be set 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 
00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Write completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:45.073 Read completed with error (sct=0, sc=8) 00:29:46.009 [2024-10-14 16:54:50.483620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a8a70 is same with the state(6) to be set 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 [2024-10-14 16:54:50.505022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7750 is same with the state(6) to be set 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 [2024-10-14 16:54:50.505667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a7390 is same with the state(6) to be set 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read 
completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 [2024-10-14 16:54:50.508557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f706c00d7c0 is same with the state(6) to be set 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Write completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 Read completed with error (sct=0, sc=8) 00:29:46.009 [2024-10-14 16:54:50.509200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f706c00cfe0 is same with the state(6) to be set 00:29:46.009 Initializing NVMe Controllers 00:29:46.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.009 Controller IO queue size 128, less than required. 00:29:46.009 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:46.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:46.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:46.009 Initialization complete. Launching workers. 
00:29:46.009 ======================================================== 00:29:46.009 Latency(us) 00:29:46.009 Device Information : IOPS MiB/s Average min max 00:29:46.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.35 0.08 924117.42 1372.38 1006406.46 00:29:46.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.32 0.08 902367.51 255.04 1010631.37 00:29:46.009 ======================================================== 00:29:46.009 Total : 323.67 0.16 912941.31 255.04 1010631.37 00:29:46.009 00:29:46.009 [2024-10-14 16:54:50.509818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a8a70 (9): Bad file descriptor 00:29:46.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:46.009 16:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.009 16:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:46.009 16:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 719200 00:29:46.009 16:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 719200 00:29:46.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (719200) - No such process 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 719200 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 719200 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 719200 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:46.577 [2024-10-14 16:54:51.040199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=719883 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719883 00:29:46.577 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:46.577 [2024-10-14 16:54:51.112176] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
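Two things in the output above are worth decoding. The burst of "completed with error (sct=0, sc=8)" lines from the first run is expected: nvmf_delete_subsystem was issued while spdk_nvme_perf still had a full queue against Delay0, so the outstanding commands are aborted (generic status code 0x8 corresponds to command aborted due to SQ deletion) and perf exits with "errors occurred". The second phase then recreates the subsystem, re-adds the listener and the Delay0 namespace, runs perf again for 3 seconds, and simply waits for it to finish on its own. The polling traced below is roughly equivalent to the following sketch, where perf_pid stands for the spdk_nvme_perf PID the script captured:

    # paraphrased from delete_subsystem.sh lines 56-60; not a verbatim copy of the script
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        sleep 0.5
        (( delay++ > 20 )) && exit 1    # give up after roughly ten seconds
    done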
00:29:47.145 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:47.145 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719883 00:29:47.145 16:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:47.713 16:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:47.713 16:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719883 00:29:47.713 16:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:47.972 16:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:47.972 16:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719883 00:29:47.972 16:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:48.540 16:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:48.540 16:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719883 00:29:48.540 16:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:49.108 16:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:49.108 16:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719883 00:29:49.108 16:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:49.676 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:49.676 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719883 00:29:49.676 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:49.676 Initializing NVMe Controllers 00:29:49.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.676 Controller IO queue size 128, less than required. 00:29:49.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:49.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:49.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:49.676 Initialization complete. Launching workers. 
00:29:49.676 ======================================================== 00:29:49.676 Latency(us) 00:29:49.676 Device Information : IOPS MiB/s Average min max 00:29:49.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002680.86 1000131.10 1043559.58 00:29:49.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003907.21 1000244.09 1009934.33 00:29:49.676 ======================================================== 00:29:49.676 Total : 256.00 0.12 1003294.04 1000131.10 1043559.58 00:29:49.676 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 719883 00:29:50.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (719883) - No such process 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 719883 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.243 rmmod nvme_tcp 00:29:50.243 rmmod nvme_fabrics 00:29:50.243 rmmod nvme_keyring 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 719173 ']' 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 719173 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 719173 ']' 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 719173 00:29:50.243 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 719173 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 719173' 00:29:50.244 killing process with pid 719173 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 719173 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 719173 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.244 16:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.910 16:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.910 00:29:52.910 real 0m16.128s 00:29:52.910 user 0m25.965s 00:29:52.910 sys 0m6.209s 00:29:52.910 16:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.910 16:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:52.910 ************************************ 00:29:52.910 END TEST nvmf_delete_subsystem 00:29:52.910 ************************************ 00:29:52.910 16:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:52.910 16:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:52.911 16:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.911 16:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:52.911 ************************************ 00:29:52.911 START TEST nvmf_host_management 00:29:52.911 ************************************ 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:52.911 * Looking for test storage... 00:29:52.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:52.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.911 --rc genhtml_branch_coverage=1 00:29:52.911 --rc genhtml_function_coverage=1 00:29:52.911 --rc genhtml_legend=1 00:29:52.911 --rc geninfo_all_blocks=1 00:29:52.911 --rc geninfo_unexecuted_blocks=1 00:29:52.911 00:29:52.911 ' 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:52.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.911 --rc genhtml_branch_coverage=1 00:29:52.911 --rc genhtml_function_coverage=1 00:29:52.911 --rc genhtml_legend=1 00:29:52.911 --rc geninfo_all_blocks=1 00:29:52.911 --rc geninfo_unexecuted_blocks=1 00:29:52.911 00:29:52.911 ' 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:52.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.911 --rc genhtml_branch_coverage=1 00:29:52.911 --rc genhtml_function_coverage=1 00:29:52.911 --rc genhtml_legend=1 00:29:52.911 --rc geninfo_all_blocks=1 00:29:52.911 --rc geninfo_unexecuted_blocks=1 00:29:52.911 00:29:52.911 ' 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:52.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.911 --rc genhtml_branch_coverage=1 00:29:52.911 --rc genhtml_function_coverage=1 00:29:52.911 --rc genhtml_legend=1 
00:29:52.911 --rc geninfo_all_blocks=1 00:29:52.911 --rc geninfo_unexecuted_blocks=1 00:29:52.911 00:29:52.911 ' 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:52.911 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.912 16:54:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.912 16:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.480 16:55:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.480 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:59.481 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:59.481 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
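Both E810 physical functions (0x8086:0x159b) have just been matched above; the next entries resolve each one to its kernel net device (cvl_0_0, cvl_0_1). That discovery reduces to a small sysfs walk; a minimal sketch, not the actual gather_supported_nvmf_pci_devs implementation, assuming only the standard /sys/bus/pci layout:

    #!/usr/bin/env bash
    # Walk PCI functions and report net devices backed by Intel E810 ports
    # (vendor 0x8086, device 0x159b), the same match the trace reports as
    # "Found 0000:86:00.0 (0x8086 - 0x159b)".
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")     # e.g. 0x8086
        device=$(<"$pci/device")     # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue        # port may not be bound to a netdev driver
            echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done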
00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:59.481 Found net devices under 0000:86:00.0: cvl_0_0 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:59.481 Found net devices under 0000:86:00.1: cvl_0_1 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.481 16:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:29:59.481 00:29:59.481 --- 10.0.0.2 ping statistics --- 00:29:59.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.481 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:59.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:29:59.481 00:29:59.481 --- 10.0.0.1 ping statistics --- 00:29:59.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.481 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=723884 00:29:59.481 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 723884 00:29:59.482 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:59.482 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 723884 ']' 00:29:59.482 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.482 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:59.482 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
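The nvmf_tcp_init sequence traced a few entries above condenses to the commands below; every command, interface name, and address is taken from the trace (the iptables comment option is omitted here). The target-side E810 port is moved into its own network namespace so target (10.0.0.2) and initiator (10.0.0.1) can talk over the two physically connected ports, and the ping checks confirm the path before the target is started:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root namespace -> target address
    ip netns exec "$NS" ping -c 1 10.0.0.1              # namespace -> initiator address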
00:29:59.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.482 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:59.482 16:55:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:59.482 [2024-10-14 16:55:03.204330] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:59.482 [2024-10-14 16:55:03.205214] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:29:59.482 [2024-10-14 16:55:03.205247] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.482 [2024-10-14 16:55:03.279269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.482 [2024-10-14 16:55:03.322923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:59.482 [2024-10-14 16:55:03.322960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.482 [2024-10-14 16:55:03.322968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.482 [2024-10-14 16:55:03.322974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.482 [2024-10-14 16:55:03.322980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.482 [2024-10-14 16:55:03.327618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.482 [2024-10-14 16:55:03.327712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.482 [2024-10-14 16:55:03.327818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.482 [2024-10-14 16:55:03.327818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:59.482 [2024-10-14 16:55:03.394653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:59.482 [2024-10-14 16:55:03.395304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:59.482 [2024-10-14 16:55:03.396209] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:59.482 [2024-10-14 16:55:03.396216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:59.482 [2024-10-14 16:55:03.396353] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
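With the target now running in interrupt mode, the next entries drive the subsystem setup through a generated rpcs.txt that is never echoed into the log. A plausible call-by-call equivalent using scripts/rpc.py is sketched below; the bdev name, sizes, serial number, listener address, and host NQN come from the trace, while the exact calls and their ordering are assumptions:

    # The transport itself is created in the trace with: rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"        # run from the SPDK tree; default RPC socket
    $RPC bdev_malloc_create -b Malloc0 64 512           # MALLOC_BDEV_SIZE=64 MB, MALLOC_BLOCK_SIZE=512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0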
00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:59.482 [2024-10-14 16:55:04.084458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:59.482 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:59.741 Malloc0 00:29:59.741 [2024-10-14 16:55:04.168659] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=724149 00:29:59.741 16:55:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 724149 /var/tmp/bdevperf.sock 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 724149 ']' 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:59.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:59.741 { 00:29:59.741 "params": { 00:29:59.741 "name": "Nvme$subsystem", 00:29:59.741 "trtype": "$TEST_TRANSPORT", 00:29:59.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:59.741 "adrfam": "ipv4", 00:29:59.741 "trsvcid": "$NVMF_PORT", 00:29:59.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:59.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:59.741 "hdgst": ${hdgst:-false}, 00:29:59.741 "ddgst": ${ddgst:-false} 00:29:59.741 }, 00:29:59.741 "method": "bdev_nvme_attach_controller" 00:29:59.741 } 00:29:59.741 EOF 00:29:59.741 )") 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
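The heredoc above is only the per-controller params template; the resolved fragment is printed in the next entries and handed to bdevperf on /dev/fd/63 (bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10). The complete document behind that file descriptor is shaped roughly as below; the params block is the one printed in the trace, while the outer "subsystems"/"config" wrapper is an assumption based on SPDK's JSON-config file format:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }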
00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:29:59.741 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:59.741 "params": { 00:29:59.741 "name": "Nvme0", 00:29:59.741 "trtype": "tcp", 00:29:59.741 "traddr": "10.0.0.2", 00:29:59.741 "adrfam": "ipv4", 00:29:59.741 "trsvcid": "4420", 00:29:59.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:59.741 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:59.741 "hdgst": false, 00:29:59.741 "ddgst": false 00:29:59.741 }, 00:29:59.741 "method": "bdev_nvme_attach_controller" 00:29:59.741 }' 00:29:59.741 [2024-10-14 16:55:04.262892] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:29:59.741 [2024-10-14 16:55:04.262938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724149 ] 00:29:59.741 [2024-10-14 16:55:04.332572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.741 [2024-10-14 16:55:04.373254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.309 Running I/O for 10 seconds... 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:30:00.309 16:55:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:00.570 [2024-10-14 16:55:05.092389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6f80 is same with the state(6) to be set 00:30:00.570 [2024-10-14 16:55:05.092433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6f80 is same with the state(6) to be set 00:30:00.570 [2024-10-14 16:55:05.092441] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6f80 is same with the state(6) to be set 00:30:00.570 [2024-10-14 16:55:05.092448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6f80 is same with the state(6) to be set 00:30:00.570 [2024-10-14 16:55:05.092454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6f80 is same with the state(6) to be set 00:30:00.570 [2024-10-14 16:55:05.092460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6f80 is same with the state(6) to be set 00:30:00.570 [2024-10-14 16:55:05.092466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e6f80 is same with the state(6) to be set 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.570 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:00.570 [2024-10-14 16:55:05.103623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.570 [2024-10-14 16:55:05.103654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.570 [2024-10-14 16:55:05.103670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.570 [2024-10-14 16:55:05.103684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:00.570 [2024-10-14 16:55:05.103698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf85c0 is same with the state(6) to be set 00:30:00.570 [2024-10-14 16:55:05.103871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.103882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.103904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.103919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.103939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.103954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.103968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.103982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.103990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.103997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.570 [2024-10-14 16:55:05.104162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.570 [2024-10-14 16:55:05.104170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.571 [2024-10-14 16:55:05.104753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.571 [2024-10-14 16:55:05.104759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.572 [2024-10-14 16:55:05.104768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.572 [2024-10-14 16:55:05.104774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.572 [2024-10-14 16:55:05.104782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.572 [2024-10-14 16:55:05.104788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.572 [2024-10-14 16:55:05.104796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.572 [2024-10-14 16:55:05.104802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.572 [2024-10-14 16:55:05.104810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.572 [2024-10-14 16:55:05.104816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:00.572 [2024-10-14 16:55:05.104878] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe11850 was disconnected and freed. reset controller. 00:30:00.572 [2024-10-14 16:55:05.105762] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:00.572 task offset: 106496 on job bdev=Nvme0n1 fails 00:30:00.572 00:30:00.572 Latency(us) 00:30:00.572 [2024-10-14T14:55:05.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.572 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:00.572 Job: Nvme0n1 ended in about 0.41 seconds with error 00:30:00.572 Verification LBA range: start 0x0 length 0x400 00:30:00.572 Nvme0n1 : 0.41 2014.53 125.91 154.96 0.00 28723.01 1716.42 26838.55 00:30:00.572 [2024-10-14T14:55:05.206Z] =================================================================================================================== 00:30:00.572 [2024-10-14T14:55:05.206Z] Total : 2014.53 125.91 154.96 0.00 28723.01 1716.42 26838.55 00:30:00.572 [2024-10-14 16:55:05.108108] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:00.572 [2024-10-14 16:55:05.108132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf85c0 (9): Bad file descriptor 00:30:00.572 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.572 16:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:00.572 [2024-10-14 16:55:05.200799] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
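The flood of ABORTED - SQ DELETION completions above is the expected side effect of the host_management test tearing the target down while bdevperf still has WRITEs queued: sct/sc 00/08 is the generic "Command Aborted due to SQ Deletion" status, and each print shows the qid, cid and LBA range of the command that was dropped. If such a run ever needs summarizing, a small hypothetical helper could do it (console.log is a placeholder for a saved copy of this output; len is in logical blocks, 512 B here since 128 blocks x 512 B matches the 65536 B IO size used by the job):

# hypothetical post-processing sketch, not part of the test suite
grep -o 'WRITE sqid:[0-9]* cid:[0-9]* nsid:[0-9]* lba:[0-9]* len:[0-9]*' console.log \
  | awk -F'[: ]' '{n++; blocks += $NF} END {printf "%d aborted WRITEs, %d blocks (~%d KiB)\n", n, blocks, blocks * 512 / 1024}'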
00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 724149 00:30:01.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (724149) - No such process 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:01.508 { 00:30:01.508 "params": { 00:30:01.508 "name": "Nvme$subsystem", 00:30:01.508 "trtype": "$TEST_TRANSPORT", 00:30:01.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:01.508 "adrfam": "ipv4", 00:30:01.508 "trsvcid": "$NVMF_PORT", 00:30:01.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:01.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:01.508 "hdgst": ${hdgst:-false}, 00:30:01.508 "ddgst": ${ddgst:-false} 00:30:01.508 }, 00:30:01.508 "method": "bdev_nvme_attach_controller" 00:30:01.508 } 00:30:01.508 EOF 00:30:01.508 )") 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:01.508 16:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:01.508 "params": { 00:30:01.508 "name": "Nvme0", 00:30:01.508 "trtype": "tcp", 00:30:01.508 "traddr": "10.0.0.2", 00:30:01.508 "adrfam": "ipv4", 00:30:01.508 "trsvcid": "4420", 00:30:01.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:01.508 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:01.508 "hdgst": false, 00:30:01.508 "ddgst": false 00:30:01.508 }, 00:30:01.508 "method": "bdev_nvme_attach_controller" 00:30:01.508 }' 00:30:01.767 [2024-10-14 16:55:06.165067] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
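The config that bdevperf reads from /dev/fd/62 is assembled on the fly by gen_nvmf_target_json; only the bdev_nvme_attach_controller entry printed above appears verbatim in the trace, so the outer wrapper in the sketch below is an assumption (the real generator may emit additional entries), and /tmp/bdevperf.json is an illustrative path. A standalone run against the same target could plausibly be driven like this:

# minimal sketch, assuming the standard "subsystems"/"bdev" JSON config layout;
# the attach-controller params are copied verbatim from the printf output above
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1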
00:30:01.767 [2024-10-14 16:55:06.165119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724400 ] 00:30:01.767 [2024-10-14 16:55:06.233440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.767 [2024-10-14 16:55:06.271646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.025 Running I/O for 1 seconds... 00:30:02.961 1984.00 IOPS, 124.00 MiB/s 00:30:02.961 Latency(us) 00:30:02.961 [2024-10-14T14:55:07.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:02.961 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:02.961 Verification LBA range: start 0x0 length 0x400 00:30:02.961 Nvme0n1 : 1.01 2023.82 126.49 0.00 0.00 31119.12 6772.05 27213.04 00:30:02.961 [2024-10-14T14:55:07.595Z] =================================================================================================================== 00:30:02.961 [2024-10-14T14:55:07.595Z] Total : 2023.82 126.49 0.00 0.00 31119.12 6772.05 27213.04 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:03.220 rmmod nvme_tcp 00:30:03.220 rmmod nvme_fabrics 00:30:03.220 rmmod nvme_keyring 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 723884 ']' 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 723884 00:30:03.220 16:55:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 723884 ']' 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 723884 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 723884 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 723884' 00:30:03.220 killing process with pid 723884 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 723884 00:30:03.220 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 723884 00:30:03.479 [2024-10-14 16:55:07.915775] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.479 16:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.382 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:05.383 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:05.383 00:30:05.383 real 0m13.009s 00:30:05.383 user 0m18.245s 
00:30:05.383 sys 0m6.403s 00:30:05.383 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:05.383 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:05.383 ************************************ 00:30:05.383 END TEST nvmf_host_management 00:30:05.383 ************************************ 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:05.642 ************************************ 00:30:05.642 START TEST nvmf_lvol 00:30:05.642 ************************************ 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:05.642 * Looking for test storage... 00:30:05.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:05.642 16:55:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:05.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.642 --rc genhtml_branch_coverage=1 00:30:05.642 --rc genhtml_function_coverage=1 00:30:05.642 --rc genhtml_legend=1 00:30:05.642 --rc geninfo_all_blocks=1 00:30:05.642 --rc geninfo_unexecuted_blocks=1 00:30:05.642 00:30:05.642 ' 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:05.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.642 --rc genhtml_branch_coverage=1 00:30:05.642 --rc genhtml_function_coverage=1 00:30:05.642 --rc genhtml_legend=1 00:30:05.642 --rc geninfo_all_blocks=1 00:30:05.642 --rc geninfo_unexecuted_blocks=1 00:30:05.642 00:30:05.642 ' 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:05.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.642 --rc genhtml_branch_coverage=1 00:30:05.642 --rc genhtml_function_coverage=1 00:30:05.642 --rc genhtml_legend=1 00:30:05.642 --rc geninfo_all_blocks=1 00:30:05.642 --rc geninfo_unexecuted_blocks=1 00:30:05.642 00:30:05.642 ' 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:05.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.642 --rc genhtml_branch_coverage=1 00:30:05.642 --rc genhtml_function_coverage=1 00:30:05.642 --rc 
genhtml_legend=1 00:30:05.642 --rc geninfo_all_blocks=1 00:30:05.642 --rc geninfo_unexecuted_blocks=1 00:30:05.642 00:30:05.642 ' 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.642 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.902 16:55:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.902 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:05.903 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:05.903 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:05.903 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.903 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.903 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.903 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:05.903 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:05.903 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:05.903 16:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:12.468 16:55:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:12.468 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:12.468 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:12.468 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:12.469 Found net devices under 0000:86:00.0: cvl_0_0 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:12.469 Found net devices under 0000:86:00.1: cvl_0_1 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:12.469 16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.469 
16:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:12.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:30:12.469 00:30:12.469 --- 10.0.0.2 ping statistics --- 00:30:12.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.469 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:12.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:30:12.469 00:30:12.469 --- 10.0.0.1 ping statistics --- 00:30:12.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.469 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=728158 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 728158 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 728158 ']' 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:12.469 [2024-10-14 16:55:16.264010] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:12.469 [2024-10-14 16:55:16.264886] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:30:12.469 [2024-10-14 16:55:16.264916] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.469 [2024-10-14 16:55:16.338238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:12.469 [2024-10-14 16:55:16.379387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.469 [2024-10-14 16:55:16.379426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.469 [2024-10-14 16:55:16.379433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.469 [2024-10-14 16:55:16.379439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.469 [2024-10-14 16:55:16.379444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.469 [2024-10-14 16:55:16.384620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.469 [2024-10-14 16:55:16.384650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.469 [2024-10-14 16:55:16.384652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.469 [2024-10-14 16:55:16.450379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:12.469 [2024-10-14 16:55:16.451257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:12.469 [2024-10-14 16:55:16.451791] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:12.469 [2024-10-14 16:55:16.451887] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
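Before this target came up, nvmf_tcp_init moved one of the two detected e810 ports into a private network namespace, so initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) traffic flows between the two physical ports rather than over loopback. A condensed sketch of that plumbing, using only commands that appear in the trace above (paths shortened, iptables comment dropped):

# sketch of the namespace setup traced above; interface names are the ones
# detected for the two 0x159b ports in this run
ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, host netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                  # host -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7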
00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:12.469 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:12.470 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:12.470 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.470 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:12.470 [2024-10-14 16:55:16.693382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.470 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:12.470 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:12.470 16:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:12.728 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:12.728 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:12.987 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:12.987 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=254e0f77-f97f-44f1-a4b1-2a550febc503 00:30:12.987 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 254e0f77-f97f-44f1-a4b1-2a550febc503 lvol 20 00:30:13.245 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b124c67e-d0be-4330-9282-6cba6a4071e8 00:30:13.245 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:13.510 16:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b124c67e-d0be-4330-9282-6cba6a4071e8 00:30:13.510 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:13.767 [2024-10-14 16:55:18.289315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:30:13.767 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:14.025 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=728602 00:30:14.025 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:14.025 16:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:14.960 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b124c67e-d0be-4330-9282-6cba6a4071e8 MY_SNAPSHOT 00:30:15.218 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=52648a17-03f3-435b-92bd-ec7e24f189bb 00:30:15.218 16:55:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b124c67e-d0be-4330-9282-6cba6a4071e8 30 00:30:15.475 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 52648a17-03f3-435b-92bd-ec7e24f189bb MY_CLONE 00:30:15.733 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0bbd8f77-8574-4a25-b877-8028c85a847b 00:30:15.733 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0bbd8f77-8574-4a25-b877-8028c85a847b 00:30:16.299 16:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 728602 00:30:24.410 Initializing NVMe Controllers 00:30:24.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:24.410 Controller IO queue size 128, less than required. 00:30:24.410 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:24.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:24.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:24.410 Initialization complete. Launching workers. 
00:30:24.410 ======================================================== 00:30:24.410 Latency(us) 00:30:24.410 Device Information : IOPS MiB/s Average min max 00:30:24.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12370.20 48.32 10349.70 1437.18 72690.95 00:30:24.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12228.90 47.77 10466.40 3588.22 50800.25 00:30:24.410 ======================================================== 00:30:24.410 Total : 24599.10 96.09 10407.72 1437.18 72690.95 00:30:24.410 00:30:24.410 16:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:24.410 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b124c67e-d0be-4330-9282-6cba6a4071e8 00:30:24.669 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 254e0f77-f97f-44f1-a4b1-2a550febc503 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.928 rmmod nvme_tcp 00:30:24.928 rmmod nvme_fabrics 00:30:24.928 rmmod nvme_keyring 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 728158 ']' 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 728158 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 728158 ']' 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 728158 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 728158 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 728158' 00:30:24.928 killing process with pid 728158 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 728158 00:30:24.928 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 728158 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.187 16:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:27.719 00:30:27.719 real 0m21.708s 00:30:27.719 user 0m55.438s 00:30:27.719 sys 0m9.598s 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:27.719 ************************************ 00:30:27.719 END TEST nvmf_lvol 00:30:27.719 ************************************ 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:27.719 ************************************ 00:30:27.719 START TEST nvmf_lvs_grow 00:30:27.719 
************************************ 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:27.719 * Looking for test storage... 00:30:27.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:30:27.719 16:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:27.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.719 --rc genhtml_branch_coverage=1 00:30:27.719 --rc genhtml_function_coverage=1 00:30:27.719 --rc genhtml_legend=1 00:30:27.719 --rc geninfo_all_blocks=1 00:30:27.719 --rc geninfo_unexecuted_blocks=1 00:30:27.719 00:30:27.719 ' 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:27.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.719 --rc genhtml_branch_coverage=1 00:30:27.719 --rc genhtml_function_coverage=1 00:30:27.719 --rc genhtml_legend=1 00:30:27.719 --rc geninfo_all_blocks=1 00:30:27.719 --rc geninfo_unexecuted_blocks=1 00:30:27.719 00:30:27.719 ' 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:27.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.719 --rc genhtml_branch_coverage=1 00:30:27.719 --rc genhtml_function_coverage=1 00:30:27.719 --rc genhtml_legend=1 00:30:27.719 --rc geninfo_all_blocks=1 00:30:27.719 --rc geninfo_unexecuted_blocks=1 00:30:27.719 00:30:27.719 ' 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:27.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.719 --rc genhtml_branch_coverage=1 00:30:27.719 --rc genhtml_function_coverage=1 00:30:27.719 --rc genhtml_legend=1 00:30:27.719 --rc geninfo_all_blocks=1 00:30:27.719 --rc geninfo_unexecuted_blocks=1 00:30:27.719 00:30:27.719 ' 00:30:27.719 16:55:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.719 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:27.720 16:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:34.286 16:55:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:34.286 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.286 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:34.287 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:34.287 Found net devices under 0000:86:00.0: cvl_0_0 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:34.287 Found net devices under 0000:86:00.1: cvl_0_1 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:34.287 16:55:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:34.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:30:34.287 00:30:34.287 --- 10.0.0.2 ping statistics --- 00:30:34.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.287 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:30:34.287 16:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:34.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:30:34.287 00:30:34.287 --- 10.0.0.1 ping statistics --- 00:30:34.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.287 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=733778 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 733778 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 733778 ']' 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:34.287 [2024-10-14 16:55:38.104353] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:34.287 [2024-10-14 16:55:38.105261] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:30:34.287 [2024-10-14 16:55:38.105292] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.287 [2024-10-14 16:55:38.176510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.287 [2024-10-14 16:55:38.217080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.287 [2024-10-14 16:55:38.217114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.287 [2024-10-14 16:55:38.217120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.287 [2024-10-14 16:55:38.217126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.287 [2024-10-14 16:55:38.217131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:34.287 [2024-10-14 16:55:38.217654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.287 [2024-10-14 16:55:38.282635] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:34.287 [2024-10-14 16:55:38.282842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:34.287 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:34.288 [2024-10-14 16:55:38.518331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:34.288 ************************************ 00:30:34.288 START TEST lvs_grow_clean 00:30:34.288 ************************************ 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:34.288 16:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:34.546 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:34.546 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:34.546 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:34.805 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:34.805 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:34.805 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc lvol 150 00:30:34.805 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d2ec50f3-30f0-4873-a2aa-48aebcd25f85 00:30:34.805 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:34.805 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:35.064 [2024-10-14 16:55:39.574032] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:35.064 [2024-10-14 16:55:39.574159] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:35.064 true 00:30:35.064 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:35.064 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:35.322 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:35.322 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:35.322 16:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d2ec50f3-30f0-4873-a2aa-48aebcd25f85 00:30:35.580 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:35.839 [2024-10-14 16:55:40.334486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.839 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:36.095 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=734275 00:30:36.095 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:36.095 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:36.095 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 734275 /var/tmp/bdevperf.sock 00:30:36.095 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 734275 ']' 00:30:36.095 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:36.095 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:36.095 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:36.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:36.095 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:36.095 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:36.095 [2024-10-14 16:55:40.599757] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:30:36.095 [2024-10-14 16:55:40.599806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734275 ] 00:30:36.095 [2024-10-14 16:55:40.669222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.095 [2024-10-14 16:55:40.710789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.352 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:36.352 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:30:36.352 16:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:36.610 Nvme0n1 00:30:36.610 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:36.868 [ 00:30:36.868 { 00:30:36.868 "name": "Nvme0n1", 00:30:36.868 "aliases": [ 00:30:36.868 "d2ec50f3-30f0-4873-a2aa-48aebcd25f85" 00:30:36.868 ], 00:30:36.868 "product_name": "NVMe disk", 00:30:36.868 "block_size": 4096, 00:30:36.868 "num_blocks": 38912, 00:30:36.868 "uuid": "d2ec50f3-30f0-4873-a2aa-48aebcd25f85", 00:30:36.868 "numa_id": 1, 00:30:36.868 "assigned_rate_limits": { 00:30:36.868 "rw_ios_per_sec": 0, 00:30:36.868 "rw_mbytes_per_sec": 0, 00:30:36.868 "r_mbytes_per_sec": 0, 00:30:36.868 "w_mbytes_per_sec": 0 00:30:36.868 }, 00:30:36.868 "claimed": false, 00:30:36.868 "zoned": false, 00:30:36.868 "supported_io_types": { 00:30:36.868 "read": true, 00:30:36.869 "write": true, 00:30:36.869 "unmap": true, 00:30:36.869 "flush": true, 00:30:36.869 "reset": true, 00:30:36.869 "nvme_admin": true, 00:30:36.869 "nvme_io": true, 00:30:36.869 "nvme_io_md": false, 00:30:36.869 "write_zeroes": true, 00:30:36.869 "zcopy": false, 00:30:36.869 "get_zone_info": false, 00:30:36.869 "zone_management": false, 00:30:36.869 "zone_append": false, 00:30:36.869 "compare": true, 00:30:36.869 "compare_and_write": true, 00:30:36.869 "abort": true, 00:30:36.869 "seek_hole": false, 00:30:36.869 "seek_data": false, 00:30:36.869 "copy": true, 
00:30:36.869 "nvme_iov_md": false 00:30:36.869 }, 00:30:36.869 "memory_domains": [ 00:30:36.869 { 00:30:36.869 "dma_device_id": "system", 00:30:36.869 "dma_device_type": 1 00:30:36.869 } 00:30:36.869 ], 00:30:36.869 "driver_specific": { 00:30:36.869 "nvme": [ 00:30:36.869 { 00:30:36.869 "trid": { 00:30:36.869 "trtype": "TCP", 00:30:36.869 "adrfam": "IPv4", 00:30:36.869 "traddr": "10.0.0.2", 00:30:36.869 "trsvcid": "4420", 00:30:36.869 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:36.869 }, 00:30:36.869 "ctrlr_data": { 00:30:36.869 "cntlid": 1, 00:30:36.869 "vendor_id": "0x8086", 00:30:36.869 "model_number": "SPDK bdev Controller", 00:30:36.869 "serial_number": "SPDK0", 00:30:36.869 "firmware_revision": "25.01", 00:30:36.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:36.869 "oacs": { 00:30:36.869 "security": 0, 00:30:36.869 "format": 0, 00:30:36.869 "firmware": 0, 00:30:36.869 "ns_manage": 0 00:30:36.869 }, 00:30:36.869 "multi_ctrlr": true, 00:30:36.869 "ana_reporting": false 00:30:36.869 }, 00:30:36.869 "vs": { 00:30:36.869 "nvme_version": "1.3" 00:30:36.869 }, 00:30:36.869 "ns_data": { 00:30:36.869 "id": 1, 00:30:36.869 "can_share": true 00:30:36.869 } 00:30:36.869 } 00:30:36.869 ], 00:30:36.869 "mp_policy": "active_passive" 00:30:36.869 } 00:30:36.869 } 00:30:36.869 ] 00:30:36.869 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=734311 00:30:36.869 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:36.869 16:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:36.869 Running I/O for 10 seconds... 
00:30:38.245 Latency(us) 00:30:38.245 [2024-10-14T14:55:42.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.245 Nvme0n1 : 1.00 22640.00 88.44 0.00 0.00 0.00 0.00 0.00 00:30:38.245 [2024-10-14T14:55:42.879Z] =================================================================================================================== 00:30:38.245 [2024-10-14T14:55:42.879Z] Total : 22640.00 88.44 0.00 0.00 0.00 0.00 0.00 00:30:38.245 00:30:38.811 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:39.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:39.070 Nvme0n1 : 2.00 22982.00 89.77 0.00 0.00 0.00 0.00 0.00 00:30:39.070 [2024-10-14T14:55:43.704Z] =================================================================================================================== 00:30:39.070 [2024-10-14T14:55:43.705Z] Total : 22982.00 89.77 0.00 0.00 0.00 0.00 0.00 00:30:39.071 00:30:39.071 true 00:30:39.071 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:39.071 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:39.329 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:39.329 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:39.329 16:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 734311 00:30:39.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:39.896 Nvme0n1 : 3.00 23095.00 90.21 0.00 0.00 0.00 0.00 0.00 00:30:39.896 [2024-10-14T14:55:44.530Z] =================================================================================================================== 00:30:39.896 [2024-10-14T14:55:44.530Z] Total : 23095.00 90.21 0.00 0.00 0.00 0.00 0.00 00:30:39.896 00:30:41.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:41.272 Nvme0n1 : 4.00 23184.75 90.57 0.00 0.00 0.00 0.00 0.00 00:30:41.272 [2024-10-14T14:55:45.906Z] =================================================================================================================== 00:30:41.272 [2024-10-14T14:55:45.906Z] Total : 23184.75 90.57 0.00 0.00 0.00 0.00 0.00 00:30:41.272 00:30:42.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:42.209 Nvme0n1 : 5.00 23224.60 90.72 0.00 0.00 0.00 0.00 0.00 00:30:42.209 [2024-10-14T14:55:46.843Z] =================================================================================================================== 00:30:42.209 [2024-10-14T14:55:46.843Z] Total : 23224.60 90.72 0.00 0.00 0.00 0.00 0.00 00:30:42.209 00:30:43.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.145 Nvme0n1 : 6.00 23252.83 90.83 0.00 0.00 0.00 0.00 0.00 00:30:43.145 [2024-10-14T14:55:47.779Z] 
=================================================================================================================== 00:30:43.145 [2024-10-14T14:55:47.779Z] Total : 23252.83 90.83 0.00 0.00 0.00 0.00 0.00 00:30:43.145 00:30:44.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.081 Nvme0n1 : 7.00 23289.43 90.97 0.00 0.00 0.00 0.00 0.00 00:30:44.081 [2024-10-14T14:55:48.715Z] =================================================================================================================== 00:30:44.081 [2024-10-14T14:55:48.715Z] Total : 23289.43 90.97 0.00 0.00 0.00 0.00 0.00 00:30:44.081 00:30:45.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:45.016 Nvme0n1 : 8.00 23290.12 90.98 0.00 0.00 0.00 0.00 0.00 00:30:45.016 [2024-10-14T14:55:49.650Z] =================================================================================================================== 00:30:45.016 [2024-10-14T14:55:49.650Z] Total : 23290.12 90.98 0.00 0.00 0.00 0.00 0.00 00:30:45.016 00:30:45.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:45.953 Nvme0n1 : 9.00 23288.67 90.97 0.00 0.00 0.00 0.00 0.00 00:30:45.953 [2024-10-14T14:55:50.587Z] =================================================================================================================== 00:30:45.953 [2024-10-14T14:55:50.587Z] Total : 23288.67 90.97 0.00 0.00 0.00 0.00 0.00 00:30:45.953 00:30:46.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:46.925 Nvme0n1 : 10.00 23318.70 91.09 0.00 0.00 0.00 0.00 0.00 00:30:46.925 [2024-10-14T14:55:51.559Z] =================================================================================================================== 00:30:46.925 [2024-10-14T14:55:51.559Z] Total : 23318.70 91.09 0.00 0.00 0.00 0.00 0.00 00:30:46.925 00:30:46.925 00:30:46.925 Latency(us) 00:30:46.925 [2024-10-14T14:55:51.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:46.925 Nvme0n1 : 10.01 23318.93 91.09 0.00 0.00 5486.16 3136.37 27213.04 00:30:46.925 [2024-10-14T14:55:51.559Z] =================================================================================================================== 00:30:46.925 [2024-10-14T14:55:51.559Z] Total : 23318.93 91.09 0.00 0.00 5486.16 3136.37 27213.04 00:30:46.925 { 00:30:46.925 "results": [ 00:30:46.925 { 00:30:46.925 "job": "Nvme0n1", 00:30:46.925 "core_mask": "0x2", 00:30:46.925 "workload": "randwrite", 00:30:46.925 "status": "finished", 00:30:46.925 "queue_depth": 128, 00:30:46.925 "io_size": 4096, 00:30:46.925 "runtime": 10.005391, 00:30:46.925 "iops": 23318.92876550252, 00:30:46.925 "mibps": 91.08956549024421, 00:30:46.925 "io_failed": 0, 00:30:46.925 "io_timeout": 0, 00:30:46.925 "avg_latency_us": 5486.164467306104, 00:30:46.925 "min_latency_us": 3136.365714285714, 00:30:46.925 "max_latency_us": 27213.04380952381 00:30:46.925 } 00:30:46.925 ], 00:30:46.925 "core_count": 1 00:30:46.925 } 00:30:46.925 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 734275 00:30:46.925 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 734275 ']' 00:30:46.925 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 734275 
00:30:46.925 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:30:46.925 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:46.925 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 734275 00:30:47.184 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:47.184 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:47.184 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 734275' 00:30:47.184 killing process with pid 734275 00:30:47.184 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 734275 00:30:47.184 Received shutdown signal, test time was about 10.000000 seconds 00:30:47.184 00:30:47.184 Latency(us) 00:30:47.184 [2024-10-14T14:55:51.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.184 [2024-10-14T14:55:51.818Z] =================================================================================================================== 00:30:47.184 [2024-10-14T14:55:51.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:47.184 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 734275 00:30:47.184 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:47.442 16:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:47.703 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:47.703 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:47.979 [2024-10-14 16:55:52.514099] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 
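
Deleting the AIO base bdev hot-removes the lvstore (the vbdev_lvol notice above), and the NOT wrapper expanded below asserts that the store can no longer be looked up. A sketch of what that negative check amounts to, with the lvstore UUID copied from the log and the rpc.py path shortened:

  # Drop the base bdev; vbdev_lvol closes the lvstore when its backing bdev disappears.
  ./scripts/rpc.py bdev_aio_delete aio_bdev
  # The lookup must now fail with -19 "No such device"; success would be a test failure.
  if ./scripts/rpc.py bdev_lvol_get_lvstores -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc; then
      echo "lvstore still present after base bdev removal" >&2
      exit 1
  fi
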
00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:47.979 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:48.274 request: 00:30:48.274 { 00:30:48.274 "uuid": "8df466e3-fb4c-4fc4-8c17-635b1dfd54fc", 00:30:48.274 "method": "bdev_lvol_get_lvstores", 00:30:48.274 "req_id": 1 00:30:48.274 } 00:30:48.274 Got JSON-RPC error response 00:30:48.274 response: 00:30:48.274 { 00:30:48.274 "code": -19, 00:30:48.274 "message": "No such device" 00:30:48.274 } 00:30:48.274 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:30:48.274 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:48.274 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:48.274 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:48.274 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:48.532 aio_bdev 00:30:48.532 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
d2ec50f3-30f0-4873-a2aa-48aebcd25f85 00:30:48.532 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=d2ec50f3-30f0-4873-a2aa-48aebcd25f85 00:30:48.532 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:48.533 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:30:48.533 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:48.533 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:48.533 16:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:48.533 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d2ec50f3-30f0-4873-a2aa-48aebcd25f85 -t 2000 00:30:48.791 [ 00:30:48.791 { 00:30:48.791 "name": "d2ec50f3-30f0-4873-a2aa-48aebcd25f85", 00:30:48.791 "aliases": [ 00:30:48.791 "lvs/lvol" 00:30:48.791 ], 00:30:48.791 "product_name": "Logical Volume", 00:30:48.791 "block_size": 4096, 00:30:48.791 "num_blocks": 38912, 00:30:48.791 "uuid": "d2ec50f3-30f0-4873-a2aa-48aebcd25f85", 00:30:48.791 "assigned_rate_limits": { 00:30:48.791 "rw_ios_per_sec": 0, 00:30:48.791 "rw_mbytes_per_sec": 0, 00:30:48.791 "r_mbytes_per_sec": 0, 00:30:48.791 "w_mbytes_per_sec": 0 00:30:48.791 }, 00:30:48.791 "claimed": false, 00:30:48.791 "zoned": false, 00:30:48.791 "supported_io_types": { 00:30:48.791 "read": true, 00:30:48.791 "write": true, 00:30:48.791 "unmap": true, 00:30:48.791 "flush": false, 00:30:48.791 "reset": true, 00:30:48.791 "nvme_admin": false, 00:30:48.791 "nvme_io": false, 00:30:48.791 "nvme_io_md": false, 00:30:48.791 "write_zeroes": true, 00:30:48.791 "zcopy": false, 00:30:48.791 "get_zone_info": false, 00:30:48.791 "zone_management": false, 00:30:48.791 "zone_append": false, 00:30:48.791 "compare": false, 00:30:48.791 "compare_and_write": false, 00:30:48.791 "abort": false, 00:30:48.791 "seek_hole": true, 00:30:48.791 "seek_data": true, 00:30:48.791 "copy": false, 00:30:48.791 "nvme_iov_md": false 00:30:48.791 }, 00:30:48.791 "driver_specific": { 00:30:48.791 "lvol": { 00:30:48.791 "lvol_store_uuid": "8df466e3-fb4c-4fc4-8c17-635b1dfd54fc", 00:30:48.791 "base_bdev": "aio_bdev", 00:30:48.791 "thin_provision": false, 00:30:48.791 "num_allocated_clusters": 38, 00:30:48.791 "snapshot": false, 00:30:48.791 "clone": false, 00:30:48.791 "esnap_clone": false 00:30:48.791 } 00:30:48.791 } 00:30:48.791 } 00:30:48.791 ] 00:30:48.791 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:30:48.791 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:48.791 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:49.049 16:55:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:49.049 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:49.049 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:49.307 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:49.307 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d2ec50f3-30f0-4873-a2aa-48aebcd25f85 00:30:49.307 16:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8df466e3-fb4c-4fc4-8c17-635b1dfd54fc 00:30:49.565 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:49.824 00:30:49.824 real 0m15.769s 00:30:49.824 user 0m15.258s 00:30:49.824 sys 0m1.536s 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:49.824 ************************************ 00:30:49.824 END TEST lvs_grow_clean 00:30:49.824 ************************************ 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:49.824 ************************************ 00:30:49.824 START TEST lvs_grow_dirty 00:30:49.824 ************************************ 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:49.824 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:50.083 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:50.083 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:50.342 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=222e45a3-06a0-42b0-85c3-10f29f40ac01 00:30:50.342 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:30:50.342 16:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:50.601 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:50.601 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:50.601 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 lvol 150 00:30:50.859 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=04e498b0-33f4-4ea7-884f-559c71a0e9a2 00:30:50.859 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:50.859 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:50.859 [2024-10-14 16:55:55.414034] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:50.859 [2024-10-14 16:55:55.414160] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:50.859 true 00:30:50.859 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:30:50.859 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:51.118 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:51.118 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:51.376 16:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 04e498b0-33f4-4ea7-884f-559c71a0e9a2 00:30:51.376 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:51.634 [2024-10-14 16:55:56.175216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.634 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:51.893 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=736857 00:30:51.893 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:51.893 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:51.893 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 736857 /var/tmp/bdevperf.sock 00:30:51.893 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 736857 ']' 00:30:51.893 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:51.893 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:51.893 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:51.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:51.893 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:51.893 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:51.893 [2024-10-14 16:55:56.422946] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:30:51.893 [2024-10-14 16:55:56.422993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736857 ] 00:30:51.893 [2024-10-14 16:55:56.492491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.152 [2024-10-14 16:55:56.536076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.152 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:52.152 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:30:52.152 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:52.411 Nvme0n1 00:30:52.411 16:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:52.670 [ 00:30:52.670 { 00:30:52.670 "name": "Nvme0n1", 00:30:52.670 "aliases": [ 00:30:52.670 "04e498b0-33f4-4ea7-884f-559c71a0e9a2" 00:30:52.670 ], 00:30:52.670 "product_name": "NVMe disk", 00:30:52.670 "block_size": 4096, 00:30:52.670 "num_blocks": 38912, 00:30:52.670 "uuid": "04e498b0-33f4-4ea7-884f-559c71a0e9a2", 00:30:52.670 "numa_id": 1, 00:30:52.670 "assigned_rate_limits": { 00:30:52.670 "rw_ios_per_sec": 0, 00:30:52.670 "rw_mbytes_per_sec": 0, 00:30:52.670 "r_mbytes_per_sec": 0, 00:30:52.670 "w_mbytes_per_sec": 0 00:30:52.670 }, 00:30:52.670 "claimed": false, 00:30:52.670 "zoned": false, 00:30:52.670 "supported_io_types": { 00:30:52.671 "read": true, 00:30:52.671 "write": true, 00:30:52.671 "unmap": true, 00:30:52.671 "flush": true, 00:30:52.671 "reset": true, 00:30:52.671 "nvme_admin": true, 00:30:52.671 "nvme_io": true, 00:30:52.671 "nvme_io_md": false, 00:30:52.671 "write_zeroes": true, 00:30:52.671 "zcopy": false, 00:30:52.671 "get_zone_info": false, 00:30:52.671 "zone_management": false, 00:30:52.671 "zone_append": false, 00:30:52.671 "compare": true, 00:30:52.671 "compare_and_write": true, 00:30:52.671 "abort": true, 00:30:52.671 "seek_hole": false, 00:30:52.671 "seek_data": false, 00:30:52.671 "copy": true, 00:30:52.671 "nvme_iov_md": false 00:30:52.671 }, 00:30:52.671 "memory_domains": [ 00:30:52.671 { 00:30:52.671 "dma_device_id": "system", 00:30:52.671 "dma_device_type": 1 00:30:52.671 } 00:30:52.671 ], 00:30:52.671 "driver_specific": { 00:30:52.671 "nvme": [ 00:30:52.671 { 00:30:52.671 "trid": { 00:30:52.671 "trtype": "TCP", 00:30:52.671 "adrfam": "IPv4", 00:30:52.671 "traddr": "10.0.0.2", 00:30:52.671 "trsvcid": "4420", 00:30:52.671 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:52.671 }, 00:30:52.671 "ctrlr_data": { 
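
bdevperf is launched idle (-z) against its own RPC socket; only after that socket is up does the harness attach the NVMe-oF controller and trigger the run. A condensed sketch of the sequence, reusing the flags and addresses from the log (the socket poll stands in for the harness's waitforlisten helper):

  # One core (-m 0x2), 4 KiB randwrite, QD 128, 10 s, stay idle until told to run (-z).
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
  # Attach the namespace exported by the target over TCP, then start the workload.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
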
00:30:52.671 "cntlid": 1, 00:30:52.671 "vendor_id": "0x8086", 00:30:52.671 "model_number": "SPDK bdev Controller", 00:30:52.671 "serial_number": "SPDK0", 00:30:52.671 "firmware_revision": "25.01", 00:30:52.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:52.671 "oacs": { 00:30:52.671 "security": 0, 00:30:52.671 "format": 0, 00:30:52.671 "firmware": 0, 00:30:52.671 "ns_manage": 0 00:30:52.671 }, 00:30:52.671 "multi_ctrlr": true, 00:30:52.671 "ana_reporting": false 00:30:52.671 }, 00:30:52.671 "vs": { 00:30:52.671 "nvme_version": "1.3" 00:30:52.671 }, 00:30:52.671 "ns_data": { 00:30:52.671 "id": 1, 00:30:52.671 "can_share": true 00:30:52.671 } 00:30:52.671 } 00:30:52.671 ], 00:30:52.671 "mp_policy": "active_passive" 00:30:52.671 } 00:30:52.671 } 00:30:52.671 ] 00:30:52.671 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=736872 00:30:52.671 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:52.671 16:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:52.671 Running I/O for 10 seconds... 00:30:53.607 Latency(us) 00:30:53.607 [2024-10-14T14:55:58.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:53.607 Nvme0n1 : 1.00 22451.00 87.70 0.00 0.00 0.00 0.00 0.00 00:30:53.607 [2024-10-14T14:55:58.241Z] =================================================================================================================== 00:30:53.607 [2024-10-14T14:55:58.241Z] Total : 22451.00 87.70 0.00 0.00 0.00 0.00 0.00 00:30:53.607 00:30:54.543 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:30:54.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:54.801 Nvme0n1 : 2.00 22888.00 89.41 0.00 0.00 0.00 0.00 0.00 00:30:54.801 [2024-10-14T14:55:59.435Z] =================================================================================================================== 00:30:54.801 [2024-10-14T14:55:59.435Z] Total : 22888.00 89.41 0.00 0.00 0.00 0.00 0.00 00:30:54.801 00:30:54.801 true 00:30:54.801 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:30:54.801 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:55.060 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:55.060 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:55.060 16:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 736872 00:30:55.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:55.626 Nvme0n1 : 3.00 
22947.33 89.64 0.00 0.00 0.00 0.00 0.00 00:30:55.626 [2024-10-14T14:56:00.260Z] =================================================================================================================== 00:30:55.626 [2024-10-14T14:56:00.260Z] Total : 22947.33 89.64 0.00 0.00 0.00 0.00 0.00 00:30:55.626 00:30:56.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:56.560 Nvme0n1 : 4.00 23087.25 90.18 0.00 0.00 0.00 0.00 0.00 00:30:56.560 [2024-10-14T14:56:01.194Z] =================================================================================================================== 00:30:56.560 [2024-10-14T14:56:01.194Z] Total : 23087.25 90.18 0.00 0.00 0.00 0.00 0.00 00:30:56.560 00:30:57.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:57.937 Nvme0n1 : 5.00 23170.60 90.51 0.00 0.00 0.00 0.00 0.00 00:30:57.937 [2024-10-14T14:56:02.571Z] =================================================================================================================== 00:30:57.937 [2024-10-14T14:56:02.571Z] Total : 23170.60 90.51 0.00 0.00 0.00 0.00 0.00 00:30:57.937 00:30:58.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:58.875 Nvme0n1 : 6.00 23230.67 90.74 0.00 0.00 0.00 0.00 0.00 00:30:58.875 [2024-10-14T14:56:03.509Z] =================================================================================================================== 00:30:58.875 [2024-10-14T14:56:03.509Z] Total : 23230.67 90.74 0.00 0.00 0.00 0.00 0.00 00:30:58.875 00:30:59.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.812 Nvme0n1 : 7.00 23274.57 90.92 0.00 0.00 0.00 0.00 0.00 00:30:59.812 [2024-10-14T14:56:04.446Z] =================================================================================================================== 00:30:59.812 [2024-10-14T14:56:04.446Z] Total : 23274.57 90.92 0.00 0.00 0.00 0.00 0.00 00:30:59.812 00:31:00.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:00.749 Nvme0n1 : 8.00 23320.12 91.09 0.00 0.00 0.00 0.00 0.00 00:31:00.749 [2024-10-14T14:56:05.383Z] =================================================================================================================== 00:31:00.749 [2024-10-14T14:56:05.383Z] Total : 23320.12 91.09 0.00 0.00 0.00 0.00 0.00 00:31:00.749 00:31:01.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:01.686 Nvme0n1 : 9.00 23342.22 91.18 0.00 0.00 0.00 0.00 0.00 00:31:01.686 [2024-10-14T14:56:06.320Z] =================================================================================================================== 00:31:01.686 [2024-10-14T14:56:06.320Z] Total : 23342.22 91.18 0.00 0.00 0.00 0.00 0.00 00:31:01.686 00:31:02.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.621 Nvme0n1 : 10.00 23352.10 91.22 0.00 0.00 0.00 0.00 0.00 00:31:02.621 [2024-10-14T14:56:07.255Z] =================================================================================================================== 00:31:02.621 [2024-10-14T14:56:07.255Z] Total : 23352.10 91.22 0.00 0.00 0.00 0.00 0.00 00:31:02.621 00:31:02.621 00:31:02.621 Latency(us) 00:31:02.621 [2024-10-14T14:56:07.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.621 Nvme0n1 : 10.00 23354.62 91.23 0.00 0.00 5477.49 3120.76 27712.37 00:31:02.621 
[2024-10-14T14:56:07.255Z] =================================================================================================================== 00:31:02.621 [2024-10-14T14:56:07.255Z] Total : 23354.62 91.23 0.00 0.00 5477.49 3120.76 27712.37 00:31:02.621 { 00:31:02.621 "results": [ 00:31:02.621 { 00:31:02.621 "job": "Nvme0n1", 00:31:02.621 "core_mask": "0x2", 00:31:02.621 "workload": "randwrite", 00:31:02.621 "status": "finished", 00:31:02.621 "queue_depth": 128, 00:31:02.621 "io_size": 4096, 00:31:02.621 "runtime": 10.003675, 00:31:02.621 "iops": 23354.617178187018, 00:31:02.621 "mibps": 91.22897335229304, 00:31:02.621 "io_failed": 0, 00:31:02.621 "io_timeout": 0, 00:31:02.621 "avg_latency_us": 5477.487175272794, 00:31:02.621 "min_latency_us": 3120.7619047619046, 00:31:02.621 "max_latency_us": 27712.365714285716 00:31:02.621 } 00:31:02.621 ], 00:31:02.621 "core_count": 1 00:31:02.621 } 00:31:02.621 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 736857 00:31:02.621 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 736857 ']' 00:31:02.621 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 736857 00:31:02.621 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:31:02.621 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:02.621 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 736857 00:31:02.879 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:02.879 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:02.879 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 736857' 00:31:02.879 killing process with pid 736857 00:31:02.879 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 736857 00:31:02.879 Received shutdown signal, test time was about 10.000000 seconds 00:31:02.879 00:31:02.879 Latency(us) 00:31:02.879 [2024-10-14T14:56:07.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.879 [2024-10-14T14:56:07.513Z] =================================================================================================================== 00:31:02.879 [2024-10-14T14:56:07.513Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:02.879 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 736857 00:31:02.879 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:03.137 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:31:03.395 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:31:03.395 16:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:03.395 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:03.395 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:03.395 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 733778 00:31:03.395 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 733778 00:31:03.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 733778 Killed "${NVMF_APP[@]}" "$@" 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=738700 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 738700 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 738700 ']' 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
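
The dirty variant kills the running target with SIGKILL so the lvstore never gets a clean shutdown, then starts a fresh nvmf_tgt (here in interrupt mode, inside the cvl_0_0_ns_spdk namespace) to recover it. Roughly, with the PIDs replaced by variables:

  # Kill the old target hard so the lvstore metadata stays dirty on the AIO file.
  kill -9 "$old_nvmfpid"; wait "$old_nvmfpid" || true
  # Bring up a new target in the test namespace; --interrupt-mode exercises the interrupt path.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!
  # Wait for the new RPC socket before issuing any RPCs against it.
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
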
00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:03.654 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:03.654 [2024-10-14 16:56:08.094860] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:03.654 [2024-10-14 16:56:08.095773] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:31:03.654 [2024-10-14 16:56:08.095807] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.654 [2024-10-14 16:56:08.163888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.654 [2024-10-14 16:56:08.204377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.654 [2024-10-14 16:56:08.204408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:03.654 [2024-10-14 16:56:08.204415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:03.654 [2024-10-14 16:56:08.204423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:03.654 [2024-10-14 16:56:08.204429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:03.654 [2024-10-14 16:56:08.204935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.654 [2024-10-14 16:56:08.271312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:03.654 [2024-10-14 16:56:08.271527] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
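
With the new target up, re-creating the AIO bdev over the same backing file is enough to bring the lvstore back: blobstore recovery replays the dirty metadata (the "Performing recovery on blobstore" notice below) and the lvol reappears under its old UUID. A sketch of that recover-and-verify step, with the workspace paths shortened and the UUIDs taken from the log:

  # Recreate the base bdev on the untouched file; lvstore recovery runs during examine.
  ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
  # The lvol should come back under the same UUID ...
  ./scripts/rpc.py bdev_get_bdevs -b 04e498b0-33f4-4ea7-884f-559c71a0e9a2 -t 2000
  # ... and the grown lvstore geometry (99 data clusters, 61 free) should have survived.
  ./scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 | jq -r '.[0].total_data_clusters'
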
00:31:03.912 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:03.912 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:03.912 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:03.912 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:03.912 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:03.912 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.912 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:03.912 [2024-10-14 16:56:08.506271] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:03.912 [2024-10-14 16:56:08.506474] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:03.912 [2024-10-14 16:56:08.506556] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:03.913 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:03.913 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 04e498b0-33f4-4ea7-884f-559c71a0e9a2 00:31:03.913 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=04e498b0-33f4-4ea7-884f-559c71a0e9a2 00:31:03.913 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:03.913 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:31:03.913 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:03.913 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:03.913 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:04.171 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 04e498b0-33f4-4ea7-884f-559c71a0e9a2 -t 2000 00:31:04.430 [ 00:31:04.430 { 00:31:04.430 "name": "04e498b0-33f4-4ea7-884f-559c71a0e9a2", 00:31:04.430 "aliases": [ 00:31:04.430 "lvs/lvol" 00:31:04.430 ], 00:31:04.430 "product_name": "Logical Volume", 00:31:04.430 "block_size": 4096, 00:31:04.430 "num_blocks": 38912, 00:31:04.430 "uuid": "04e498b0-33f4-4ea7-884f-559c71a0e9a2", 00:31:04.430 "assigned_rate_limits": { 00:31:04.430 "rw_ios_per_sec": 0, 00:31:04.430 "rw_mbytes_per_sec": 0, 00:31:04.430 
"r_mbytes_per_sec": 0, 00:31:04.430 "w_mbytes_per_sec": 0 00:31:04.430 }, 00:31:04.430 "claimed": false, 00:31:04.430 "zoned": false, 00:31:04.430 "supported_io_types": { 00:31:04.430 "read": true, 00:31:04.430 "write": true, 00:31:04.430 "unmap": true, 00:31:04.430 "flush": false, 00:31:04.430 "reset": true, 00:31:04.430 "nvme_admin": false, 00:31:04.430 "nvme_io": false, 00:31:04.430 "nvme_io_md": false, 00:31:04.430 "write_zeroes": true, 00:31:04.430 "zcopy": false, 00:31:04.430 "get_zone_info": false, 00:31:04.430 "zone_management": false, 00:31:04.430 "zone_append": false, 00:31:04.430 "compare": false, 00:31:04.430 "compare_and_write": false, 00:31:04.430 "abort": false, 00:31:04.430 "seek_hole": true, 00:31:04.430 "seek_data": true, 00:31:04.430 "copy": false, 00:31:04.430 "nvme_iov_md": false 00:31:04.430 }, 00:31:04.430 "driver_specific": { 00:31:04.430 "lvol": { 00:31:04.430 "lvol_store_uuid": "222e45a3-06a0-42b0-85c3-10f29f40ac01", 00:31:04.430 "base_bdev": "aio_bdev", 00:31:04.430 "thin_provision": false, 00:31:04.430 "num_allocated_clusters": 38, 00:31:04.430 "snapshot": false, 00:31:04.430 "clone": false, 00:31:04.430 "esnap_clone": false 00:31:04.430 } 00:31:04.430 } 00:31:04.430 } 00:31:04.430 ] 00:31:04.430 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:31:04.430 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:31:04.430 16:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:04.689 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:04.689 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:31:04.689 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:04.689 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:04.689 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:04.947 [2024-10-14 16:56:09.445389] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:31:04.947 16:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:04.947 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:31:05.206 request: 00:31:05.206 { 00:31:05.206 "uuid": "222e45a3-06a0-42b0-85c3-10f29f40ac01", 00:31:05.206 "method": "bdev_lvol_get_lvstores", 00:31:05.206 "req_id": 1 00:31:05.206 } 00:31:05.206 Got JSON-RPC error response 00:31:05.206 response: 00:31:05.206 { 00:31:05.206 "code": -19, 00:31:05.206 "message": "No such device" 00:31:05.206 } 00:31:05.206 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:31:05.206 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:05.206 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:05.206 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:05.206 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:05.465 aio_bdev 00:31:05.465 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 04e498b0-33f4-4ea7-884f-559c71a0e9a2 00:31:05.465 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=04e498b0-33f4-4ea7-884f-559c71a0e9a2 00:31:05.465 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:05.465 16:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:31:05.465 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:05.465 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:05.465 16:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:05.465 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 04e498b0-33f4-4ea7-884f-559c71a0e9a2 -t 2000 00:31:05.725 [ 00:31:05.725 { 00:31:05.725 "name": "04e498b0-33f4-4ea7-884f-559c71a0e9a2", 00:31:05.725 "aliases": [ 00:31:05.725 "lvs/lvol" 00:31:05.725 ], 00:31:05.725 "product_name": "Logical Volume", 00:31:05.725 "block_size": 4096, 00:31:05.725 "num_blocks": 38912, 00:31:05.725 "uuid": "04e498b0-33f4-4ea7-884f-559c71a0e9a2", 00:31:05.725 "assigned_rate_limits": { 00:31:05.725 "rw_ios_per_sec": 0, 00:31:05.725 "rw_mbytes_per_sec": 0, 00:31:05.725 "r_mbytes_per_sec": 0, 00:31:05.725 "w_mbytes_per_sec": 0 00:31:05.725 }, 00:31:05.725 "claimed": false, 00:31:05.725 "zoned": false, 00:31:05.725 "supported_io_types": { 00:31:05.725 "read": true, 00:31:05.725 "write": true, 00:31:05.725 "unmap": true, 00:31:05.725 "flush": false, 00:31:05.725 "reset": true, 00:31:05.725 "nvme_admin": false, 00:31:05.725 "nvme_io": false, 00:31:05.725 "nvme_io_md": false, 00:31:05.725 "write_zeroes": true, 00:31:05.725 "zcopy": false, 00:31:05.725 "get_zone_info": false, 00:31:05.725 "zone_management": false, 00:31:05.725 "zone_append": false, 00:31:05.725 "compare": false, 00:31:05.725 "compare_and_write": false, 00:31:05.725 "abort": false, 00:31:05.725 "seek_hole": true, 00:31:05.725 "seek_data": true, 00:31:05.725 "copy": false, 00:31:05.725 "nvme_iov_md": false 00:31:05.725 }, 00:31:05.725 "driver_specific": { 00:31:05.725 "lvol": { 00:31:05.725 "lvol_store_uuid": "222e45a3-06a0-42b0-85c3-10f29f40ac01", 00:31:05.725 "base_bdev": "aio_bdev", 00:31:05.725 "thin_provision": false, 00:31:05.725 "num_allocated_clusters": 38, 00:31:05.725 "snapshot": false, 00:31:05.725 "clone": false, 00:31:05.725 "esnap_clone": false 00:31:05.725 } 00:31:05.725 } 00:31:05.725 } 00:31:05.725 ] 00:31:05.725 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:31:05.725 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:31:05.725 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:05.984 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:05.984 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:31:05.984 16:56:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:06.244 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:06.244 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 04e498b0-33f4-4ea7-884f-559c71a0e9a2 00:31:06.244 16:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 222e45a3-06a0-42b0-85c3-10f29f40ac01 00:31:06.503 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:06.762 00:31:06.762 real 0m16.839s 00:31:06.762 user 0m34.162s 00:31:06.762 sys 0m3.898s 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:06.762 ************************************ 00:31:06.762 END TEST lvs_grow_dirty 00:31:06.762 ************************************ 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:06.762 nvmf_trace.0 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
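For reference, the lvs_grow_dirty cleanup traced above reduces to a handful of rpc.py calls plus the trace-archive step; a minimal sketch, using the UUIDs from this run and a placeholder $output_dir for the autotest output directory:

# Tear down the lvol stack built for lvs_grow_dirty (UUIDs taken from the trace above)
scripts/rpc.py bdev_lvol_delete 04e498b0-33f4-4ea7-884f-559c71a0e9a2             # the logical volume
scripts/rpc.py bdev_lvol_delete_lvstore -u 222e45a3-06a0-42b0-85c3-10f29f40ac01  # its lvol store
scripts/rpc.py bdev_aio_delete aio_bdev                                          # the AIO base bdev
rm -f test/nvmf/target/aio_bdev                                                  # and its backing file
# process_shm then archives the shared-memory trace buffer for offline analysis
tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0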
00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.762 rmmod nvme_tcp 00:31:06.762 rmmod nvme_fabrics 00:31:06.762 rmmod nvme_keyring 00:31:06.762 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 738700 ']' 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 738700 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 738700 ']' 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 738700 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 738700 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 738700' 00:31:07.021 killing process with pid 738700 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 738700 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 738700 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.021 16:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:09.557 00:31:09.557 real 0m41.835s 00:31:09.557 user 0m51.958s 00:31:09.557 sys 0m10.311s 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:09.557 ************************************ 00:31:09.557 END TEST nvmf_lvs_grow 00:31:09.557 ************************************ 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:09.557 ************************************ 00:31:09.557 START TEST nvmf_bdev_io_wait 00:31:09.557 ************************************ 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:09.557 * Looking for test storage... 
00:31:09.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:09.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.557 --rc genhtml_branch_coverage=1 00:31:09.557 --rc genhtml_function_coverage=1 00:31:09.557 --rc genhtml_legend=1 00:31:09.557 --rc geninfo_all_blocks=1 00:31:09.557 --rc geninfo_unexecuted_blocks=1 00:31:09.557 00:31:09.557 ' 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:09.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.557 --rc genhtml_branch_coverage=1 00:31:09.557 --rc genhtml_function_coverage=1 00:31:09.557 --rc genhtml_legend=1 00:31:09.557 --rc geninfo_all_blocks=1 00:31:09.557 --rc geninfo_unexecuted_blocks=1 00:31:09.557 00:31:09.557 ' 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:09.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.557 --rc genhtml_branch_coverage=1 00:31:09.557 --rc genhtml_function_coverage=1 00:31:09.557 --rc genhtml_legend=1 00:31:09.557 --rc geninfo_all_blocks=1 00:31:09.557 --rc geninfo_unexecuted_blocks=1 00:31:09.557 00:31:09.557 ' 00:31:09.557 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:09.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.557 --rc genhtml_branch_coverage=1 00:31:09.557 --rc genhtml_function_coverage=1 00:31:09.557 --rc genhtml_legend=1 00:31:09.557 --rc geninfo_all_blocks=1 00:31:09.558 --rc 
geninfo_unexecuted_blocks=1 00:31:09.558 00:31:09.558 ' 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.558 16:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.129 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
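The NIC discovery above walks a cached PCI device list and matches on vendor/device IDs (0x8086 with 0x1592/0x159b for the E810 parts this rig uses) before looking up the bound net devices in sysfs. A condensed sketch of that logic, using a direct sysfs walk in place of common.sh's prebuilt pci_bus_cache:

# Find E810 ports and the kernel net devices bound to them
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") =~ ^0x(1592|159b)$ ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done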
00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:16.130 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:16.130 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:16.130 Found net devices under 0000:86:00.0: cvl_0_0 00:31:16.130 
16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:16.130 Found net devices under 0000:86:00.1: cvl_0_1 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:31:16.130 00:31:16.130 --- 10.0.0.2 ping statistics --- 00:31:16.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.130 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:31:16.130 00:31:16.130 --- 10.0.0.1 ping statistics --- 00:31:16.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.130 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=742754 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 742754 00:31:16.130 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 742754 ']' 00:31:16.131 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.131 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:16.131 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
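The topology the pings above just validated is built entirely with ip/iptables: the target port (cvl_0_0) is moved into its own network namespace and addressed 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, and the target process then runs inside that namespace. A sketch of the sequence, with paths shortened relative to the SPDK checkout:

# Target NIC in a private namespace, initiator NIC in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# nvmfappstart then launches the target inside the namespace, paused until framework_start_init
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &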
00:31:16.131 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:16.131 16:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 [2024-10-14 16:56:19.983222] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:16.131 [2024-10-14 16:56:19.984104] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:31:16.131 [2024-10-14 16:56:19.984138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.131 [2024-10-14 16:56:20.076189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.131 [2024-10-14 16:56:20.120613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.131 [2024-10-14 16:56:20.120650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.131 [2024-10-14 16:56:20.120657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.131 [2024-10-14 16:56:20.120674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.131 [2024-10-14 16:56:20.120679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.131 [2024-10-14 16:56:20.122216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.131 [2024-10-14 16:56:20.122329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.131 [2024-10-14 16:56:20.122344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.131 [2024-10-14 16:56:20.122349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.131 [2024-10-14 16:56:20.122762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 [2024-10-14 16:56:20.266438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:16.131 [2024-10-14 16:56:20.266624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:16.131 [2024-10-14 16:56:20.267055] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:16.131 [2024-10-14 16:56:20.267098] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
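Because the target was launched with --wait-for-rpc, bdev_io_wait.sh shrinks the bdev_io pool before letting the framework initialize; that deliberately tiny pool is what forces I/O onto the spdk_bdev_queue_io_wait path this test exercises. In rpc.py terms (the option meanings are my reading of bdev_set_options, not spelled out in the trace):

# -p: bdev_io pool size, -c: per-thread bdev_io cache size -- both kept tiny on purpose
scripts/rpc.py bdev_set_options -p 5 -c 1
# only now do the subsystems (and the interrupt-mode poll groups logged above) start
scripts/rpc.py framework_start_init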
00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 [2024-10-14 16:56:20.278883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 Malloc0 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:16.131 [2024-10-14 16:56:20.343229] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=742777 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=742779 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:16.131 { 00:31:16.131 "params": { 00:31:16.131 "name": "Nvme$subsystem", 00:31:16.131 "trtype": "$TEST_TRANSPORT", 00:31:16.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.131 "adrfam": "ipv4", 00:31:16.131 "trsvcid": "$NVMF_PORT", 00:31:16.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.131 "hdgst": ${hdgst:-false}, 00:31:16.131 "ddgst": ${ddgst:-false} 00:31:16.131 }, 00:31:16.131 "method": "bdev_nvme_attach_controller" 00:31:16.131 } 00:31:16.131 EOF 00:31:16.131 )") 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=742781 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:16.131 { 00:31:16.131 "params": { 00:31:16.131 "name": "Nvme$subsystem", 00:31:16.131 "trtype": "$TEST_TRANSPORT", 00:31:16.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.131 "adrfam": "ipv4", 00:31:16.131 "trsvcid": "$NVMF_PORT", 00:31:16.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.131 "hdgst": ${hdgst:-false}, 00:31:16.131 "ddgst": ${ddgst:-false} 00:31:16.131 }, 00:31:16.131 "method": "bdev_nvme_attach_controller" 00:31:16.131 } 00:31:16.131 EOF 00:31:16.131 )") 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=742784 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:16.131 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:16.131 { 00:31:16.131 "params": { 00:31:16.131 "name": "Nvme$subsystem", 00:31:16.131 "trtype": "$TEST_TRANSPORT", 00:31:16.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.131 "adrfam": "ipv4", 00:31:16.132 "trsvcid": "$NVMF_PORT", 00:31:16.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.132 "hdgst": ${hdgst:-false}, 00:31:16.132 "ddgst": ${ddgst:-false} 00:31:16.132 }, 00:31:16.132 "method": "bdev_nvme_attach_controller" 00:31:16.132 } 00:31:16.132 EOF 00:31:16.132 )") 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:16.132 { 00:31:16.132 "params": { 00:31:16.132 "name": "Nvme$subsystem", 00:31:16.132 "trtype": "$TEST_TRANSPORT", 00:31:16.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.132 "adrfam": "ipv4", 00:31:16.132 "trsvcid": "$NVMF_PORT", 00:31:16.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.132 "hdgst": ${hdgst:-false}, 00:31:16.132 "ddgst": ${ddgst:-false} 00:31:16.132 }, 00:31:16.132 "method": "bdev_nvme_attach_controller" 00:31:16.132 } 00:31:16.132 EOF 00:31:16.132 )") 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 742777 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
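Strung together, the provisioning traced above is a short RPC sequence, after which four bdevperf instances (write, read, flush, unmap) are launched in parallel, each reading a generated JSON config over process substitution. A sketch showing only the write instance; the others differ only in core mask, -i instance id and -w workload:

# One TCP transport, one 64 MiB malloc namespace, one subsystem listening on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf attaches over NVMe/TCP using the config generated by gen_nvmf_target_json
build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &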
00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:16.132 "params": { 00:31:16.132 "name": "Nvme1", 00:31:16.132 "trtype": "tcp", 00:31:16.132 "traddr": "10.0.0.2", 00:31:16.132 "adrfam": "ipv4", 00:31:16.132 "trsvcid": "4420", 00:31:16.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.132 "hdgst": false, 00:31:16.132 "ddgst": false 00:31:16.132 }, 00:31:16.132 "method": "bdev_nvme_attach_controller" 00:31:16.132 }' 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:16.132 "params": { 00:31:16.132 "name": "Nvme1", 00:31:16.132 "trtype": "tcp", 00:31:16.132 "traddr": "10.0.0.2", 00:31:16.132 "adrfam": "ipv4", 00:31:16.132 "trsvcid": "4420", 00:31:16.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.132 "hdgst": false, 00:31:16.132 "ddgst": false 00:31:16.132 }, 00:31:16.132 "method": "bdev_nvme_attach_controller" 00:31:16.132 }' 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:16.132 "params": { 00:31:16.132 "name": "Nvme1", 00:31:16.132 "trtype": "tcp", 00:31:16.132 "traddr": "10.0.0.2", 00:31:16.132 "adrfam": "ipv4", 00:31:16.132 "trsvcid": "4420", 00:31:16.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.132 "hdgst": false, 00:31:16.132 "ddgst": false 00:31:16.132 }, 00:31:16.132 "method": "bdev_nvme_attach_controller" 00:31:16.132 }' 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:16.132 16:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:16.132 "params": { 00:31:16.132 "name": "Nvme1", 00:31:16.132 "trtype": "tcp", 00:31:16.132 "traddr": "10.0.0.2", 00:31:16.132 "adrfam": "ipv4", 00:31:16.132 "trsvcid": "4420", 00:31:16.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.132 "hdgst": false, 00:31:16.132 "ddgst": false 00:31:16.132 }, 00:31:16.132 "method": "bdev_nvme_attach_controller" 00:31:16.132 }' 00:31:16.132 [2024-10-14 16:56:20.395162] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:31:16.132 [2024-10-14 16:56:20.395213] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:16.132 [2024-10-14 16:56:20.396701] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:31:16.132 [2024-10-14 16:56:20.396742] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:16.132 [2024-10-14 16:56:20.397027] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:31:16.132 [2024-10-14 16:56:20.397064] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:16.132 [2024-10-14 16:56:20.400641] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:31:16.132 [2024-10-14 16:56:20.400686] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:16.132 [2024-10-14 16:56:20.560064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.132 [2024-10-14 16:56:20.602525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:16.132 [2024-10-14 16:56:20.664023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.132 [2024-10-14 16:56:20.706722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:16.132 [2024-10-14 16:56:20.763727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.391 [2024-10-14 16:56:20.810905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:16.391 [2024-10-14 16:56:20.834888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.391 [2024-10-14 16:56:20.874350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:16.391 Running I/O for 1 seconds... 00:31:16.391 Running I/O for 1 seconds... 00:31:16.650 Running I/O for 1 seconds... 00:31:16.650 Running I/O for 1 seconds... 
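Note: the four bdevperf jobs above (write, read, flush and unmap, pinned to core masks 0x10/0x20/0x40/0x80) each read the JSON that gen_nvmf_target_json expands from its heredoc on /dev/fd/63; the resolved bdev_nvme_attach_controller fragment is printed verbatim in the trace. A minimal standalone sketch of one of those jobs follows. The attach-controller parameters and the bdevperf flags are copied from the trace; the surrounding "subsystems"/"config" wrapper and the /tmp/nvme1.json filename are assumptions added for illustration.

# Sketch: write the resolved config to a file instead of piping it through /dev/fd/63
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# The write job from the trace: core mask 0x10, qd 128, 4 KiB I/O, 1 second,
# and -s 256 (which shows up as "-m 256" in the DPDK EAL parameters above)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256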
00:31:17.585 252216.00 IOPS, 985.22 MiB/s 00:31:17.585 Latency(us) 00:31:17.585 [2024-10-14T14:56:22.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.585 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:17.585 Nvme1n1 : 1.00 251814.55 983.65 0.00 0.00 506.17 224.30 1591.59 00:31:17.585 [2024-10-14T14:56:22.219Z] =================================================================================================================== 00:31:17.585 [2024-10-14T14:56:22.219Z] Total : 251814.55 983.65 0.00 0.00 506.17 224.30 1591.59 00:31:17.585 8206.00 IOPS, 32.05 MiB/s 00:31:17.585 Latency(us) 00:31:17.585 [2024-10-14T14:56:22.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.585 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:17.585 Nvme1n1 : 1.02 8206.36 32.06 0.00 0.00 15476.08 3401.63 25215.76 00:31:17.586 [2024-10-14T14:56:22.220Z] =================================================================================================================== 00:31:17.586 [2024-10-14T14:56:22.220Z] Total : 8206.36 32.06 0.00 0.00 15476.08 3401.63 25215.76 00:31:17.586 11174.00 IOPS, 43.65 MiB/s 00:31:17.586 Latency(us) 00:31:17.586 [2024-10-14T14:56:22.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.586 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:17.586 Nvme1n1 : 1.01 11231.92 43.87 0.00 0.00 11356.35 1622.80 16477.62 00:31:17.586 [2024-10-14T14:56:22.220Z] =================================================================================================================== 00:31:17.586 [2024-10-14T14:56:22.220Z] Total : 11231.92 43.87 0.00 0.00 11356.35 1622.80 16477.62 00:31:17.586 8186.00 IOPS, 31.98 MiB/s 00:31:17.586 Latency(us) 00:31:17.586 [2024-10-14T14:56:22.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.586 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:17.586 Nvme1n1 : 1.00 8300.14 32.42 0.00 0.00 15392.22 2278.16 32955.25 00:31:17.586 [2024-10-14T14:56:22.220Z] =================================================================================================================== 00:31:17.586 [2024-10-14T14:56:22.220Z] Total : 8300.14 32.42 0.00 0.00 15392.22 2278.16 32955.25 00:31:17.586 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 742779 00:31:17.586 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 742781 00:31:17.586 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 742784 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.844 rmmod nvme_tcp 00:31:17.844 rmmod nvme_fabrics 00:31:17.844 rmmod nvme_keyring 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 742754 ']' 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 742754 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 742754 ']' 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 742754 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 742754 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 742754' 00:31:17.844 killing process with pid 742754 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 742754 00:31:17.844 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 742754 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:31:18.102 
16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.102 16:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.007 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.007 00:31:20.007 real 0m10.845s 00:31:20.007 user 0m15.232s 00:31:20.007 sys 0m6.444s 00:31:20.007 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:20.007 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.007 ************************************ 00:31:20.007 END TEST nvmf_bdev_io_wait 00:31:20.007 ************************************ 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:20.267 ************************************ 00:31:20.267 START TEST nvmf_queue_depth 00:31:20.267 ************************************ 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:20.267 * Looking for test storage... 
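Note: the tail of nvmf_bdev_io_wait traced just above is the standard teardown: the test subsystem is deleted over RPC, the host NVMe modules are unloaded, the SPDK-tagged firewall rule is dropped and the target namespace is cleaned up. Condensed into a sketch; helper names such as killprocess, iptr and remove_spdk_ns are the framework's own functions and their bodies are only partially visible in the trace, so the expansion below is a best-effort reading.

rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem used by the test
sync
modprobe -v -r nvme-tcp                                    # also pulls out nvme_fabrics / nvme_keyring, per the rmmod output above
modprobe -v -r nvme-fabrics
kill 742754                                                # killprocess: stop the nvmf_tgt started for this test
iptables-save | grep -v SPDK_NVMF | iptables-restore       # iptr: remove rules tagged SPDK_NVMF (pipeline assumed)
ip -4 addr flush cvl_0_1                                   # remove_spdk_ns additionally deletes cvl_0_0_ns_spdk (assumed)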
00:31:20.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:20.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.267 --rc genhtml_branch_coverage=1 00:31:20.267 --rc genhtml_function_coverage=1 00:31:20.267 --rc genhtml_legend=1 00:31:20.267 --rc geninfo_all_blocks=1 00:31:20.267 --rc geninfo_unexecuted_blocks=1 00:31:20.267 00:31:20.267 ' 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:20.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.267 --rc genhtml_branch_coverage=1 00:31:20.267 --rc genhtml_function_coverage=1 00:31:20.267 --rc genhtml_legend=1 00:31:20.267 --rc geninfo_all_blocks=1 00:31:20.267 --rc geninfo_unexecuted_blocks=1 00:31:20.267 00:31:20.267 ' 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:20.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.267 --rc genhtml_branch_coverage=1 00:31:20.267 --rc genhtml_function_coverage=1 00:31:20.267 --rc genhtml_legend=1 00:31:20.267 --rc geninfo_all_blocks=1 00:31:20.267 --rc geninfo_unexecuted_blocks=1 00:31:20.267 00:31:20.267 ' 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:20.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.267 --rc genhtml_branch_coverage=1 00:31:20.267 --rc genhtml_function_coverage=1 00:31:20.267 --rc genhtml_legend=1 00:31:20.267 --rc geninfo_all_blocks=1 00:31:20.267 --rc 
geninfo_unexecuted_blocks=1 00:31:20.267 00:31:20.267 ' 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.267 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:20.268 16:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:26.836 16:56:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:26.836 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:26.836 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.836 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:31:26.837 Found net devices under 0000:86:00.0: cvl_0_0 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:26.837 Found net devices under 0000:86:00.1: cvl_0_1 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:26.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:31:26.837 00:31:26.837 --- 10.0.0.2 ping statistics --- 00:31:26.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.837 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:26.837 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:31:26.837 00:31:26.837 --- 10.0.0.1 ping statistics --- 00:31:26.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.837 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=746618 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 746618 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 746618 ']' 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
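Note: the nvmftestinit sequence traced above carves the two e810 ports into a small point-to-point setup: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened in the firewall, and a ping in each direction confirms the link. Condensed from the traced commands; the interface names and addresses are the ones this rig picked.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator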
00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:26.837 16:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:26.837 [2024-10-14 16:56:30.877823] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:26.837 [2024-10-14 16:56:30.878777] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:31:26.837 [2024-10-14 16:56:30.878812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.837 [2024-10-14 16:56:30.950966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.837 [2024-10-14 16:56:30.991870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.837 [2024-10-14 16:56:30.991905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.837 [2024-10-14 16:56:30.991912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.837 [2024-10-14 16:56:30.991921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.837 [2024-10-14 16:56:30.991927] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:26.837 [2024-10-14 16:56:30.992455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.837 [2024-10-14 16:56:31.058148] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:26.837 [2024-10-14 16:56:31.058376] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
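Note: with the namespace in place, nvmfappstart launches the target inside it; the notices above confirm that --interrupt-mode took effect for both the app thread and the nvmf poll group, which is the point of this interrupt-mode test pass. Reproduced as a sketch; waitforlisten is a framework helper, and that it blocks on /var/tmp/spdk.sock (the socket named in the trace) is an assumption about its behaviour.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# waitforlisten "$nvmfpid"  ->  returns once the RPC socket /var/tmp/spdk.sock accepts connections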
00:31:26.837 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:26.837 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:31:26.837 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:26.837 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:26.837 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:26.837 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.837 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:26.838 [2024-10-14 16:56:31.121109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:26.838 Malloc0 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
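Note: the rpc_cmd calls traced above then build the target configuration: a TCP transport, a 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE from the script header), a subsystem, its namespace and a listener on 10.0.0.2:4420. Assuming rpc_cmd is the usual thin wrapper over SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, the same setup issued directly would look like the sketch below; the rpc.py path is assumed, the flags are verbatim from the trace.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420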
00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:26.838 [2024-10-14 16:56:31.193257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=746795 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 746795 /var/tmp/bdevperf.sock 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 746795 ']' 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:26.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:26.838 [2024-10-14 16:56:31.244830] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
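Note: on the initiator side the test starts bdevperf in RPC-server mode (-z) against /var/tmp/bdevperf.sock with a queue depth of 1024, then, as the next lines of the trace show, attaches the remote namespace as bdev NVMe0 and kicks off the 10-second verify run through bdevperf.py. A sketch of that sequence; the bdevperf and bdevperf.py paths and all flags are verbatim from the trace, while the scripts/rpc.py path stands in for the framework's rpc_cmd wrapper and is an assumption.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# wait until /var/tmp/bdevperf.sock is listening (the framework uses waitforlisten for this), then:
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests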
00:31:26.838 [2024-10-14 16:56:31.244869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746795 ] 00:31:26.838 [2024-10-14 16:56:31.312617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.838 [2024-10-14 16:56:31.353033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.838 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:27.096 NVMe0n1 00:31:27.096 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:27.097 16:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:27.355 Running I/O for 10 seconds... 00:31:29.226 12282.00 IOPS, 47.98 MiB/s [2024-10-14T14:56:34.796Z] 12299.50 IOPS, 48.04 MiB/s [2024-10-14T14:56:36.172Z] 12468.33 IOPS, 48.70 MiB/s [2024-10-14T14:56:37.108Z] 12552.50 IOPS, 49.03 MiB/s [2024-10-14T14:56:38.051Z] 12628.20 IOPS, 49.33 MiB/s [2024-10-14T14:56:38.996Z] 12638.33 IOPS, 49.37 MiB/s [2024-10-14T14:56:39.933Z] 12703.14 IOPS, 49.62 MiB/s [2024-10-14T14:56:40.867Z] 12702.62 IOPS, 49.62 MiB/s [2024-10-14T14:56:41.802Z] 12743.89 IOPS, 49.78 MiB/s [2024-10-14T14:56:42.062Z] 12742.80 IOPS, 49.78 MiB/s 00:31:37.428 Latency(us) 00:31:37.428 [2024-10-14T14:56:42.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.428 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:37.428 Verification LBA range: start 0x0 length 0x4000 00:31:37.428 NVMe0n1 : 10.05 12773.18 49.90 0.00 0.00 79884.55 10860.25 48184.56 00:31:37.428 [2024-10-14T14:56:42.062Z] =================================================================================================================== 00:31:37.428 [2024-10-14T14:56:42.062Z] Total : 12773.18 49.90 0.00 0.00 79884.55 10860.25 48184.56 00:31:37.428 { 00:31:37.428 "results": [ 00:31:37.428 { 00:31:37.428 "job": "NVMe0n1", 00:31:37.428 "core_mask": "0x1", 00:31:37.428 "workload": "verify", 00:31:37.428 "status": "finished", 00:31:37.428 "verify_range": { 00:31:37.428 "start": 0, 00:31:37.428 "length": 16384 00:31:37.428 }, 00:31:37.428 "queue_depth": 1024, 00:31:37.428 "io_size": 4096, 00:31:37.428 "runtime": 10.049338, 00:31:37.428 "iops": 12773.179686064894, 00:31:37.428 "mibps": 49.89523314869099, 00:31:37.428 "io_failed": 0, 00:31:37.428 "io_timeout": 0, 00:31:37.428 "avg_latency_us": 79884.54734121729, 00:31:37.428 "min_latency_us": 10860.251428571428, 00:31:37.428 "max_latency_us": 48184.56380952381 00:31:37.428 } 
00:31:37.428 ], 00:31:37.428 "core_count": 1 00:31:37.428 } 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 746795 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 746795 ']' 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 746795 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 746795 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 746795' 00:31:37.428 killing process with pid 746795 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 746795 00:31:37.428 Received shutdown signal, test time was about 10.000000 seconds 00:31:37.428 00:31:37.428 Latency(us) 00:31:37.428 [2024-10-14T14:56:42.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.428 [2024-10-14T14:56:42.062Z] =================================================================================================================== 00:31:37.428 [2024-10-14T14:56:42.062Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:37.428 16:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 746795 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.686 rmmod nvme_tcp 00:31:37.686 rmmod nvme_fabrics 00:31:37.686 rmmod nvme_keyring 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:37.686 
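For reference, the queue-depth run traced above reduces to the following command sequence; this is a minimal sketch reassembled from the commands visible in the xtrace (binary paths, the RPC socket, the attach arguments and the subsystem nqn are taken from the log; calling rpc.py directly in place of the script's rpc_cmd wrapper, and the explicit wait/kill handling, are assumptions for illustration only):

    # start bdevperf in RPC-driven mode (-z): queue depth 1024, 4 KiB I/O, verify workload, 10 s runtime
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    # (the test script waits here until /var/tmp/bdevperf.sock is listening before issuing RPCs)
    # attach the NVMe/TCP namespace exported by the target on 10.0.0.2:4420 as bdev NVMe0n1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # run the workload and collect the JSON summary printed above (~12.7k IOPS at depth 1024)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests
    kill "$bdevperf_pid"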
16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 746618 ']' 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 746618 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 746618 ']' 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 746618 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 746618 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 746618' 00:31:37.686 killing process with pid 746618 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 746618 00:31:37.686 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 746618 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.944 16:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.015 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.015 00:31:40.015 real 0m19.755s 00:31:40.015 user 0m22.776s 00:31:40.015 sys 0m6.358s 00:31:40.015 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:31:40.016 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:40.016 ************************************ 00:31:40.016 END TEST nvmf_queue_depth 00:31:40.016 ************************************ 00:31:40.016 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:40.016 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:40.016 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:40.016 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.016 ************************************ 00:31:40.016 START TEST nvmf_target_multipath 00:31:40.016 ************************************ 00:31:40.016 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:40.016 * Looking for test storage... 00:31:40.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:40.016 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:40.016 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:31:40.016 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:40.275 16:56:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:40.275 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:40.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.276 --rc genhtml_branch_coverage=1 00:31:40.276 --rc genhtml_function_coverage=1 00:31:40.276 --rc genhtml_legend=1 00:31:40.276 --rc geninfo_all_blocks=1 00:31:40.276 --rc geninfo_unexecuted_blocks=1 00:31:40.276 00:31:40.276 ' 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:40.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.276 --rc genhtml_branch_coverage=1 00:31:40.276 --rc genhtml_function_coverage=1 00:31:40.276 --rc genhtml_legend=1 00:31:40.276 --rc geninfo_all_blocks=1 00:31:40.276 --rc geninfo_unexecuted_blocks=1 00:31:40.276 00:31:40.276 ' 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:40.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.276 --rc genhtml_branch_coverage=1 00:31:40.276 --rc genhtml_function_coverage=1 00:31:40.276 --rc genhtml_legend=1 00:31:40.276 --rc geninfo_all_blocks=1 00:31:40.276 --rc 
geninfo_unexecuted_blocks=1 00:31:40.276 00:31:40.276 ' 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:40.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.276 --rc genhtml_branch_coverage=1 00:31:40.276 --rc genhtml_function_coverage=1 00:31:40.276 --rc genhtml_legend=1 00:31:40.276 --rc geninfo_all_blocks=1 00:31:40.276 --rc geninfo_unexecuted_blocks=1 00:31:40.276 00:31:40.276 ' 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.276 16:56:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:40.276 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:40.277 16:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
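The build_nvmf_app_args trace just above shows how this run ends up launching the target with --interrupt-mode; the shell logic is roughly the following sketch (the array and variable names NVMF_APP, NVMF_APP_SHM_ID and NO_HUGE are taken from the trace, while the interrupt-mode flag variable is a hypothetical stand-in for the condition that the trace shows as '[' 1 -eq 1 ']'):

    # assembled from the xtrace of nvmf/common.sh build_nvmf_app_args
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id plus full error-level logging
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty unless a no-hugepages variant is requested
    if [ "$interrupt_mode" -eq 1 ]; then          # hypothetical name; true here because multipath.sh was run with --interrupt-mode
        NVMF_APP+=(--interrupt-mode)
    fi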
00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.846 16:56:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:46.846 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:46.846 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:46.846 16:56:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:46.846 Found net devices under 0000:86:00.0: cvl_0_0 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.846 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:46.847 Found net devices under 0000:86:00.1: cvl_0_1 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:46.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:46.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:31:46.847 00:31:46.847 --- 10.0.0.2 ping statistics --- 00:31:46.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.847 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:46.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:31:46.847 00:31:46.847 --- 10.0.0.1 ping statistics --- 00:31:46.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.847 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:46.847 only one NIC for nvmf test 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.847 rmmod nvme_tcp 00:31:46.847 rmmod nvme_fabrics 00:31:46.847 rmmod nvme_keyring 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:46.847 16:56:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.847 16:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:48.225 16:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.225 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:48.225 00:31:48.225 real 0m8.277s 00:31:48.225 user 0m1.824s 00:31:48.225 sys 0m4.473s 00:31:48.226 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:48.226 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:48.226 ************************************ 00:31:48.226 END TEST nvmf_target_multipath 00:31:48.226 ************************************ 00:31:48.226 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:48.226 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:48.226 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:48.226 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:48.485 ************************************ 00:31:48.485 START TEST nvmf_zcopy 00:31:48.485 ************************************ 00:31:48.485 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:48.485 * Looking for test storage... 
00:31:48.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:48.485 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:48.485 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:31:48.485 16:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:48.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.485 --rc genhtml_branch_coverage=1 00:31:48.485 --rc genhtml_function_coverage=1 00:31:48.485 --rc genhtml_legend=1 00:31:48.485 --rc geninfo_all_blocks=1 00:31:48.485 --rc geninfo_unexecuted_blocks=1 00:31:48.485 00:31:48.485 ' 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:48.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.485 --rc genhtml_branch_coverage=1 00:31:48.485 --rc genhtml_function_coverage=1 00:31:48.485 --rc genhtml_legend=1 00:31:48.485 --rc geninfo_all_blocks=1 00:31:48.485 --rc geninfo_unexecuted_blocks=1 00:31:48.485 00:31:48.485 ' 00:31:48.485 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:48.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.485 --rc genhtml_branch_coverage=1 00:31:48.485 --rc genhtml_function_coverage=1 00:31:48.486 --rc genhtml_legend=1 00:31:48.486 --rc geninfo_all_blocks=1 00:31:48.486 --rc geninfo_unexecuted_blocks=1 00:31:48.486 00:31:48.486 ' 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:48.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.486 --rc genhtml_branch_coverage=1 00:31:48.486 --rc genhtml_function_coverage=1 00:31:48.486 --rc genhtml_legend=1 00:31:48.486 --rc geninfo_all_blocks=1 00:31:48.486 --rc geninfo_unexecuted_blocks=1 00:31:48.486 00:31:48.486 ' 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.486 16:56:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:48.486 16:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.054 16:56:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:55.054 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.054 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:55.055 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:55.055 Found net devices under 0000:86:00.0: cvl_0_0 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:55.055 Found net devices under 0000:86:00.1: cvl_0_1 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.055 16:56:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:31:55.055 00:31:55.055 --- 10.0.0.2 ping statistics --- 00:31:55.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.055 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:31:55.055 00:31:55.055 --- 10.0.0.1 ping statistics --- 00:31:55.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.055 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=755443 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 755443 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 755443 ']' 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:55.055 16:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.055 [2024-10-14 16:56:59.010661] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.055 [2024-10-14 16:56:59.011538] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:31:55.055 [2024-10-14 16:56:59.011570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.055 [2024-10-14 16:56:59.085044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.055 [2024-10-14 16:56:59.127645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.055 [2024-10-14 16:56:59.127681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.055 [2024-10-14 16:56:59.127688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.055 [2024-10-14 16:56:59.127695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.055 [2024-10-14 16:56:59.127700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.055 [2024-10-14 16:56:59.128298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.055 [2024-10-14 16:56:59.194650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.055 [2024-10-14 16:56:59.194889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:55.055 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:55.055 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:31:55.055 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:55.055 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.056 [2024-10-14 16:56:59.272935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.056 [2024-10-14 16:56:59.297243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:55.056 16:56:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.056 malloc0 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:55.056 { 00:31:55.056 "params": { 00:31:55.056 "name": "Nvme$subsystem", 00:31:55.056 "trtype": "$TEST_TRANSPORT", 00:31:55.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:55.056 "adrfam": "ipv4", 00:31:55.056 "trsvcid": "$NVMF_PORT", 00:31:55.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:55.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:55.056 "hdgst": ${hdgst:-false}, 00:31:55.056 "ddgst": ${ddgst:-false} 00:31:55.056 }, 00:31:55.056 "method": "bdev_nvme_attach_controller" 00:31:55.056 } 00:31:55.056 EOF 00:31:55.056 )") 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:31:55.056 16:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:55.056 "params": { 00:31:55.056 "name": "Nvme1", 00:31:55.056 "trtype": "tcp", 00:31:55.056 "traddr": "10.0.0.2", 00:31:55.056 "adrfam": "ipv4", 00:31:55.056 "trsvcid": "4420", 00:31:55.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:55.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:55.056 "hdgst": false, 00:31:55.056 "ddgst": false 00:31:55.056 }, 00:31:55.056 "method": "bdev_nvme_attach_controller" 00:31:55.056 }' 00:31:55.056 [2024-10-14 16:56:59.390715] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:31:55.056 [2024-10-14 16:56:59.390760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755465 ] 00:31:55.056 [2024-10-14 16:56:59.459579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.056 [2024-10-14 16:56:59.501436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.315 Running I/O for 10 seconds... 00:31:57.628 8235.00 IOPS, 64.34 MiB/s [2024-10-14T14:57:02.828Z] 8315.00 IOPS, 64.96 MiB/s [2024-10-14T14:57:04.204Z] 8346.33 IOPS, 65.21 MiB/s [2024-10-14T14:57:05.140Z] 8361.50 IOPS, 65.32 MiB/s [2024-10-14T14:57:06.076Z] 8357.00 IOPS, 65.29 MiB/s [2024-10-14T14:57:07.013Z] 8354.83 IOPS, 65.27 MiB/s [2024-10-14T14:57:07.950Z] 8354.57 IOPS, 65.27 MiB/s [2024-10-14T14:57:08.884Z] 8350.12 IOPS, 65.24 MiB/s [2024-10-14T14:57:10.262Z] 8362.00 IOPS, 65.33 MiB/s [2024-10-14T14:57:10.262Z] 8370.40 IOPS, 65.39 MiB/s 00:32:05.628 Latency(us) 00:32:05.628 [2024-10-14T14:57:10.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.628 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:05.628 Verification LBA range: start 0x0 length 0x1000 00:32:05.628 Nvme1n1 : 10.01 8373.49 65.42 0.00 0.00 15243.72 2481.01 21221.18 00:32:05.628 [2024-10-14T14:57:10.262Z] =================================================================================================================== 00:32:05.628 [2024-10-14T14:57:10.262Z] Total : 8373.49 65.42 0.00 0.00 15243.72 2481.01 21221.18 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=757545 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:05.628 { 00:32:05.628 "params": { 00:32:05.628 "name": "Nvme$subsystem", 00:32:05.628 "trtype": "$TEST_TRANSPORT", 00:32:05.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.628 "adrfam": "ipv4", 00:32:05.628 "trsvcid": "$NVMF_PORT", 00:32:05.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.628 "hdgst": ${hdgst:-false}, 00:32:05.628 "ddgst": ${ddgst:-false} 00:32:05.628 }, 00:32:05.628 "method": "bdev_nvme_attach_controller" 00:32:05.628 } 00:32:05.628 EOF 00:32:05.628 )") 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:32:05.628 
[2024-10-14 16:57:10.020640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.020676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:32:05.628 16:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:05.628 "params": { 00:32:05.628 "name": "Nvme1", 00:32:05.628 "trtype": "tcp", 00:32:05.628 "traddr": "10.0.0.2", 00:32:05.628 "adrfam": "ipv4", 00:32:05.628 "trsvcid": "4420", 00:32:05.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:05.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:05.628 "hdgst": false, 00:32:05.628 "ddgst": false 00:32:05.628 }, 00:32:05.628 "method": "bdev_nvme_attach_controller" 00:32:05.628 }' 00:32:05.628 [2024-10-14 16:57:10.032598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.032618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 [2024-10-14 16:57:10.044591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.044608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 [2024-10-14 16:57:10.056596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.056614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 [2024-10-14 16:57:10.056743] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:32:05.628 [2024-10-14 16:57:10.056783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid757545 ] 00:32:05.628 [2024-10-14 16:57:10.068589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.068599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 [2024-10-14 16:57:10.080589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.080605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 [2024-10-14 16:57:10.092591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.092606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 [2024-10-14 16:57:10.104590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.104606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 [2024-10-14 16:57:10.116590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.116606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 [2024-10-14 16:57:10.123949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.628 [2024-10-14 16:57:10.128589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.128608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 [2024-10-14 16:57:10.140590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.628 [2024-10-14 16:57:10.140613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.628 [2024-10-14 16:57:10.152590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.629 [2024-10-14 16:57:10.152608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.629 [2024-10-14 16:57:10.164589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.629 [2024-10-14 16:57:10.164605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.629 [2024-10-14 16:57:10.168445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.629 [2024-10-14 16:57:10.176606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.629 [2024-10-14 16:57:10.176622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.629 [2024-10-14 16:57:10.188606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.629 [2024-10-14 16:57:10.188624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.629 [2024-10-14 16:57:10.200592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.629 [2024-10-14 16:57:10.200612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.629 [2024-10-14 16:57:10.212592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:32:05.629 [2024-10-14 16:57:10.212611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.629 [2024-10-14 16:57:10.224593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.629 [2024-10-14 16:57:10.224611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.629 [2024-10-14 16:57:10.236592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.629 [2024-10-14 16:57:10.236611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.629 [2024-10-14 16:57:10.248590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.629 [2024-10-14 16:57:10.248608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.629 [2024-10-14 16:57:10.260612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.629 [2024-10-14 16:57:10.260634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.272598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.272626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.284596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.284614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.296589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.296605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.308590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.308606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.320591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.320610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.332596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.332614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.344589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.344605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.356593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.356610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.368588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.368598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.380591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.380611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 
16:57:10.392590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.392605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.404590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.404604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.416591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.416609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.428592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.428610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.440588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.440596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.452591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.452605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.464592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.464608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.476598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.476620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 Running I/O for 5 seconds... 
00:32:05.888 [2024-10-14 16:57:10.494582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.494606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.509298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.509317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:05.888 [2024-10-14 16:57:10.520518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:05.888 [2024-10-14 16:57:10.520537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.534197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.534220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.544104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.544122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.558190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.558209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.572884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.572901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.588204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.588222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.599707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.599725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.613717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.613740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.628793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.628812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.640253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.640270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.653317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.653335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.668597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.668621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.680400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 
[2024-10-14 16:57:10.680419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.693906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.693925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.709042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.709060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.721847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.721865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.736778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.736796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.748216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.748235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.761615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.761632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.147 [2024-10-14 16:57:10.776639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.147 [2024-10-14 16:57:10.776659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.788774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.788798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.799385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.799404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.814255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.814273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.828979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.828996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.841779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.841797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.857059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.857082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.873237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.873255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.888631] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.888648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.900154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.900172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.914013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.914030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.928796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.928814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.939877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.939895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.953302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.953320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.964425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.964442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.978819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.978836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:10.993506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:10.993525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:11.008757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:11.008775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:11.019348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:11.019366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.406 [2024-10-14 16:57:11.034166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.406 [2024-10-14 16:57:11.034185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.049038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.049065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.060324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.060342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.073638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.073657] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.088697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.088716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.099862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.099880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.114166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.114183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.128710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.128728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.140059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.140077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.154565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.154582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.169277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.169293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.185042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.185065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.200207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.200226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.213233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.213251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.228956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.228974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.240362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.240380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.253981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.253999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.269031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.269048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.284512] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.284532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.665 [2024-10-14 16:57:11.295569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.665 [2024-10-14 16:57:11.295586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.309474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.309497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.324289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.324308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.335512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.335530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.350051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.350069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.364920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.364937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.377542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.377559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.392768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.392785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.404884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.404901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.418011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.418029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.432431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.432449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.443616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.443635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.457829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.457849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.472122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.472142] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.483620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.483648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 16480.00 IOPS, 128.75 MiB/s [2024-10-14T14:57:11.558Z] [2024-10-14 16:57:11.497746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.497765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.512818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.512836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.523787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.523805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.537443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.537461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:06.924 [2024-10-14 16:57:11.552425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:06.924 [2024-10-14 16:57:11.552443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.564448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.564467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.578145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.578164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.592793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.592810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.604232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.604250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.617474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.617493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.632153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.632173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.643836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.643855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.657726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.657743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 
16:57:11.672451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.672470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.683793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.683813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.697732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.697750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.712308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.712326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.723887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.723905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.737880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.737898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.752032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.752050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.765391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.765409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.780933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.780951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.792295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.792314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.183 [2024-10-14 16:57:11.806399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.183 [2024-10-14 16:57:11.806418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.820838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.820859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.831633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.831652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.845947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.845966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.860617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.860635] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.872181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.872198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.886270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.886288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.900880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.900897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.916086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.916104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.930452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.930470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.944557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.944574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.956085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.956103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.969385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.969403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.984738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.984755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:11.995693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:11.995711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:12.009923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:12.009940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:12.024411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:12.024429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:12.035505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:12.035523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:12.049882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:12.049901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:12.064372] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:12.064390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.442 [2024-10-14 16:57:12.075694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.442 [2024-10-14 16:57:12.075712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.089595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.089620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.104383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.104403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.115936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.115953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.129394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.129413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.143803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.143822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.156740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.156759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.170134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.170152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.185127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.185146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.199891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.199909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.212154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.212173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.225360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.225377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.240057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.240075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.251553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.251570] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.265433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.265450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.279865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.279883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.293366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.293383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.304732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.304749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.317631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.317653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.700 [2024-10-14 16:57:12.332308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.700 [2024-10-14 16:57:12.332325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.345418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.345435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.360484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.360501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.373287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.373305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.384625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.384643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.398180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.398198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.412622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.412640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.423813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.423830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.437248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.437265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.452581] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.452598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.463664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.463682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.478120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.478138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 16588.00 IOPS, 129.59 MiB/s [2024-10-14T14:57:12.593Z] [2024-10-14 16:57:12.492265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.492283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.504103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.504120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.517353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.517371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.532835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.532854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.543568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.543586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.557967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.557984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.572621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.572643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.959 [2024-10-14 16:57:12.583636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.959 [2024-10-14 16:57:12.583653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.598259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.598277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.613134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.613152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.628695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.628713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.640133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:08.218 [2024-10-14 16:57:12.640151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.654577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.654595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.669437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.669454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.684627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.684644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.695577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.695594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.710099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.710116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.724324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.724342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.736531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.736548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.748710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.748728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.760422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.760439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.772912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.772929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.788812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.788829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.804628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.804645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.815898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.815915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.829535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.829564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.218 [2024-10-14 16:57:12.844310] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.218 [2024-10-14 16:57:12.844331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.855404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.855422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.870254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.870272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.884255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.884274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.894934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.894952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.909721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.909740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.925064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.925083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.940497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.940516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.954005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.954023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.968490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.968508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.979970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.979988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:12.994124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:12.994141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:13.008839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:13.008861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:13.024598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:13.024621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:13.036868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:13.036886] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:13.048818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:13.048836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:13.061923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:13.061942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:13.076508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:13.076526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:13.087888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:13.087907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.477 [2024-10-14 16:57:13.102558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.477 [2024-10-14 16:57:13.102576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.736 [2024-10-14 16:57:13.116861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.736 [2024-10-14 16:57:13.116879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.736 [2024-10-14 16:57:13.128147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.736 [2024-10-14 16:57:13.128166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.736 [2024-10-14 16:57:13.141889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.736 [2024-10-14 16:57:13.141908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.736 [2024-10-14 16:57:13.157095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.736 [2024-10-14 16:57:13.157113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.736 [2024-10-14 16:57:13.172390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.736 [2024-10-14 16:57:13.172409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.736 [2024-10-14 16:57:13.183830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.183849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.198429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.198448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.212612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.212631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.223384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.223402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.238407] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.238424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.252668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.252685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.263811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.263829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.277277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.277295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.292327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.292345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.305271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.305289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.320926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.320944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.331893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.331910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.345991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.346008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.360100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.360117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.737 [2024-10-14 16:57:13.372305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.737 [2024-10-14 16:57:13.372323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.386662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.386681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.401289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.401307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.416401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.416420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.428274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.428291] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.442382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.442399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.456857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.456874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.468157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.468174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.481945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.481963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 16611.33 IOPS, 129.78 MiB/s [2024-10-14T14:57:13.630Z] [2024-10-14 16:57:13.496294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.496312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.507805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.507823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.522000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.522018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.536446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.536464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.547942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.547959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.562406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.562423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.577050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.577068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.590117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.590139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.604653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.604670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 16:57:13.615935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.615953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.996 [2024-10-14 
16:57:13.630666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.996 [2024-10-14 16:57:13.630684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.645079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.645097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.660161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.660178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.673833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.673851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.688442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.688461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.699932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.699949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.713753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.713770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.728328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.728346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.739271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.739289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.753733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.753751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.769011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.769028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.781861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.781879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.797124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.797142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.811924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.811941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.824504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.824526] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.836894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.836911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.850353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.850375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.865115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.865134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.255 [2024-10-14 16:57:13.879872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.255 [2024-10-14 16:57:13.879892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:13.894694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:13.894713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:13.909195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:13.909213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:13.924106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:13.924124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:13.935618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:13.935636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:13.950456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:13.950474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:13.965151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:13.965168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:13.980319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:13.980337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:13.993674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:13.993692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.008576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.008594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.021750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.021767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.036556] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.036574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.047715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.047733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.061691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.061709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.076112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.076130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.088858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.088877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.099971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.099989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.113645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.113668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.128396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.128414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.514 [2024-10-14 16:57:14.139835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.514 [2024-10-14 16:57:14.139853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.773 [2024-10-14 16:57:14.154236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.773 [2024-10-14 16:57:14.154255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.773 [2024-10-14 16:57:14.169159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.773 [2024-10-14 16:57:14.169176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.773 [2024-10-14 16:57:14.180302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.773 [2024-10-14 16:57:14.180320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.773 [2024-10-14 16:57:14.193434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.773 [2024-10-14 16:57:14.193451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.773 [2024-10-14 16:57:14.208446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.773 [2024-10-14 16:57:14.208465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.773 [2024-10-14 16:57:14.219771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.773 [2024-10-14 16:57:14.219789] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.773 [2024-10-14 16:57:14.233539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.773 [2024-10-14 16:57:14.233557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.773 [2024-10-14 16:57:14.248188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.773 [2024-10-14 16:57:14.248206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.773 [2024-10-14 16:57:14.262186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.773 [2024-10-14 16:57:14.262205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.773 [2024-10-14 16:57:14.276629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.276648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.774 [2024-10-14 16:57:14.287880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.287897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.774 [2024-10-14 16:57:14.301931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.301949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.774 [2024-10-14 16:57:14.316833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.316852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.774 [2024-10-14 16:57:14.327960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.327980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.774 [2024-10-14 16:57:14.342107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.342125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.774 [2024-10-14 16:57:14.356523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.356545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.774 [2024-10-14 16:57:14.367738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.367761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.774 [2024-10-14 16:57:14.382527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.382547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.774 [2024-10-14 16:57:14.397095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.397114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.774 [2024-10-14 16:57:14.408037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.774 [2024-10-14 16:57:14.408056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.421505] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.421524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.437277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.437295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.452742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.452759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.468514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.468534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.479865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.479884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.493452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.493471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 16601.50 IOPS, 129.70 MiB/s [2024-10-14T14:57:14.667Z] [2024-10-14 16:57:14.509052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.509070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.520780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.520798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.532295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.532314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.546333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.546352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.560576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.560594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.571789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.571807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.585899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.585919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.600583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.600607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.611367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:10.033 [2024-10-14 16:57:14.611385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.625283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.625301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.641053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.641071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.652633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.652650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.033 [2024-10-14 16:57:14.666067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.033 [2024-10-14 16:57:14.666086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.680826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.680845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.692201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.692219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.705776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.705793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.721045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.721063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.736560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.736578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.748168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.748186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.761805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.761823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.776638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.776656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.787736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.787754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.802027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.802045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.816595] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.816619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.827461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.827479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.842212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.842230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.856665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.856682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.868171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.868188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.880386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.880404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.893622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.893641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.908349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.908367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.293 [2024-10-14 16:57:14.919833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.293 [2024-10-14 16:57:14.919852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:14.934130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:14.934148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:14.944567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:14.944584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:14.957740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:14.957757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:14.972472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:14.972490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:14.983529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:14.983547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:14.997789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:14.997807] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.012770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.012787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.023914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.023932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.038349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.038367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.052889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.052911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.064319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.064336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.078500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.078518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.093326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.093344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.108620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.108637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.120094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.120111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.134247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.134264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.148541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.148560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.159532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.159550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.552 [2024-10-14 16:57:15.174211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.552 [2024-10-14 16:57:15.174230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.188328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.188347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.199788] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.199805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.212887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.212904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.226259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.226276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.240309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.240327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.251508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.251526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.266604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.266622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.280759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.280777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.292089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.292107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.305324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.305340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.316515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.316534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.328876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.328892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.341856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.341874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.356771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.356790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.372289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.372314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.385916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.385934] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.400383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.400402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.411596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.411621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.426101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.426119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.812 [2024-10-14 16:57:15.440921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.812 [2024-10-14 16:57:15.440938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.456328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.456347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.467754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.467771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.480780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.480798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.493145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.493162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 16606.20 IOPS, 129.74 MiB/s [2024-10-14T14:57:15.706Z] [2024-10-14 16:57:15.504607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.504623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 00:32:11.072 Latency(us) 00:32:11.072 [2024-10-14T14:57:15.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.072 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:32:11.072 Nvme1n1 : 5.01 16607.46 129.75 0.00 0.00 7699.66 1997.29 13793.77 00:32:11.072 [2024-10-14T14:57:15.706Z] =================================================================================================================== 00:32:11.072 [2024-10-14T14:57:15.706Z] Total : 16607.46 129.75 0.00 0.00 7699.66 1997.29 13793.77 00:32:11.072 [2024-10-14 16:57:15.516591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.516611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.528597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.528617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.540599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 
16:57:15.540624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.552594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.552612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.564596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.564617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.576594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.576622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.588590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.588608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.600591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.600608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.612587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.612598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.624587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.624595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.636591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.636606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.648587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.648596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 [2024-10-14 16:57:15.660591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.072 [2024-10-14 16:57:15.660606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (757545) - No such process 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 757545 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:11.072 16:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:11.072 delay0 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:11.072 16:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:11.331 [2024-10-14 16:57:15.798955] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:19.449 Initializing NVMe Controllers 00:32:19.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:19.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:19.449 Initialization complete. Launching workers. 00:32:19.449 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 7447 00:32:19.449 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7719, failed to submit 48 00:32:19.449 success 7595, unsuccessful 124, failed 0 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:19.449 rmmod nvme_tcp 00:32:19.449 rmmod nvme_fabrics 00:32:19.449 rmmod nvme_keyring 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 755443 ']' 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 755443 
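Before teardown, the tail of the zcopy test (zcopy.sh lines 52-56 in the trace) replaced NSID 1 with a delay bdev and drove it with the abort example. Condensed into a sketch, the RPC sequence and invocation traced above are as follows; direct use of rpc.py is an assumption, since the trace goes through the rpc_cmd wrapper, and paths are relative to the SPDK tree.

    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    "$RPC" nvmf_subsystem_remove_ns "$NQN" 1                     # detach NSID 1
    "$RPC" bdev_delay_create -b malloc0 -d delay0 \
           -r 1000000 -t 1000000 -w 1000000 -n 1000000           # delay bdev stacked on malloc0
    "$RPC" nvmf_subsystem_add_ns "$NQN" delay0 -n 1               # re-attach the slow bdev as NSID 1

    # Drive the now-slow namespace for 5 seconds to exercise abort handling
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1"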
00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 755443 ']' 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 755443 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 755443 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 755443' 00:32:19.449 killing process with pid 755443 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 755443 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 755443 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:19.449 16:57:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.385 16:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:20.385 00:32:20.385 real 0m32.092s 00:32:20.385 user 0m41.238s 00:32:20.385 sys 0m13.110s 00:32:20.385 16:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:20.385 16:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:20.385 ************************************ 00:32:20.385 END TEST nvmf_zcopy 00:32:20.385 ************************************ 00:32:20.385 16:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:20.385 16:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:20.385 16:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:20.385 16:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:20.645 ************************************ 00:32:20.645 START TEST nvmf_nmic 00:32:20.645 ************************************ 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:20.645 * Looking for test storage... 00:32:20.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:20.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.645 --rc genhtml_branch_coverage=1 00:32:20.645 --rc genhtml_function_coverage=1 00:32:20.645 --rc genhtml_legend=1 00:32:20.645 --rc geninfo_all_blocks=1 00:32:20.645 --rc geninfo_unexecuted_blocks=1 00:32:20.645 00:32:20.645 ' 00:32:20.645 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:20.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.645 --rc genhtml_branch_coverage=1 00:32:20.645 --rc genhtml_function_coverage=1 00:32:20.645 --rc genhtml_legend=1 00:32:20.645 --rc geninfo_all_blocks=1 00:32:20.645 --rc geninfo_unexecuted_blocks=1 00:32:20.645 00:32:20.645 ' 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:20.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.646 --rc genhtml_branch_coverage=1 00:32:20.646 --rc genhtml_function_coverage=1 00:32:20.646 --rc genhtml_legend=1 00:32:20.646 --rc geninfo_all_blocks=1 00:32:20.646 --rc geninfo_unexecuted_blocks=1 00:32:20.646 00:32:20.646 ' 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:20.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.646 --rc genhtml_branch_coverage=1 00:32:20.646 --rc genhtml_function_coverage=1 00:32:20.646 --rc genhtml_legend=1 00:32:20.646 --rc geninfo_all_blocks=1 00:32:20.646 --rc geninfo_unexecuted_blocks=1 00:32:20.646 00:32:20.646 ' 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.646 16:57:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:20.646 16:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:27.217 16:57:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:27.217 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:27.218 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.218 16:57:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:27.218 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:27.218 Found net devices under 0000:86:00.0: cvl_0_0 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:27.218 
16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:27.218 Found net devices under 0000:86:00.1: cvl_0_1 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:27.218 16:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
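With NET_TYPE=phy, nvmftestinit splits the two e810 ports between a target network namespace and the root (initiator) namespace, so NVMe/TCP traffic crosses a real link on a single host. The interface and namespace setup traced above, together with the link-up, firewall, and ping checks that follow, boil down to this sketch (interface names and addresses are the ones in the trace):

    # Target side:    cvl_0_0 inside namespace cvl_0_0_ns_spdk, 10.0.0.2/24
    # Initiator side: cvl_0_1 in the root namespace,            10.0.0.1/24
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface, then verify
    # connectivity in both directions (matches the ping output that follows)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1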
00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:27.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:27.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:32:27.218 00:32:27.218 --- 10.0.0.2 ping statistics --- 00:32:27.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.218 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:27.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:27.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:32:27.218 00:32:27.218 --- 10.0.0.1 ping statistics --- 00:32:27.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:27.218 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=763154 00:32:27.218 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 763154 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 763154 ']' 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 [2024-10-14 16:57:31.228639] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:27.219 [2024-10-14 16:57:31.229518] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:32:27.219 [2024-10-14 16:57:31.229548] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:27.219 [2024-10-14 16:57:31.281476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:27.219 [2024-10-14 16:57:31.322917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:27.219 [2024-10-14 16:57:31.322954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:27.219 [2024-10-14 16:57:31.322961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:27.219 [2024-10-14 16:57:31.322967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:27.219 [2024-10-14 16:57:31.322972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:27.219 [2024-10-14 16:57:31.324384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.219 [2024-10-14 16:57:31.324491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:27.219 [2024-10-14 16:57:31.324578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.219 [2024-10-14 16:57:31.324579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:27.219 [2024-10-14 16:57:31.391260] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:27.219 [2024-10-14 16:57:31.392330] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:27.219 [2024-10-14 16:57:31.392434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
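The notices above come from nvmf_tgt starting inside the namespace with four reactors switched to interrupt mode. A rough sketch of that launch plus a waitforlisten-style readiness check follows; $SPDK below is a placeholder for the checkout used in this job, and the polling loop is an illustration, not the harness's implementation:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # placeholder path for this job's checkout
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Poll the JSON-RPC socket until the application is ready to accept commands.
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done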
00:32:27.219 [2024-10-14 16:57:31.393068] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:27.219 [2024-10-14 16:57:31.393097] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 [2024-10-14 16:57:31.473378] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 Malloc0 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
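The rpc_cmd calls above configure the target for the nmic test: a TCP transport, a 64 MB malloc bdev, a subsystem carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. Run by hand against the same target, the sequence would look roughly like this (rpc_cmd in the harness wraps scripts/rpc.py; $SPDK as in the previous sketch):

  RPC="$SPDK/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # test case1 below then creates cnode2 and tries to add the same Malloc0 to it;
  # that nvmf_subsystem_add_ns call is expected to fail because the bdev is already claimed.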
00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 [2024-10-14 16:57:31.553575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:27.219 test case1: single bdev can't be used in multiple subsystems 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 [2024-10-14 16:57:31.585082] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:27.219 [2024-10-14 16:57:31.585102] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:27.219 [2024-10-14 16:57:31.585109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:27.219 request: 00:32:27.219 { 00:32:27.219 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:27.219 "namespace": { 00:32:27.219 "bdev_name": "Malloc0", 00:32:27.219 "no_auto_visible": false 00:32:27.219 }, 00:32:27.219 "method": "nvmf_subsystem_add_ns", 00:32:27.219 "req_id": 1 00:32:27.219 } 00:32:27.219 Got JSON-RPC error response 00:32:27.219 response: 00:32:27.219 { 00:32:27.219 "code": -32602, 00:32:27.219 "message": "Invalid parameters" 00:32:27.219 } 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:27.219 16:57:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:27.219 Adding namespace failed - expected result. 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:27.219 test case2: host connect to nvmf target in multiple paths 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:27.219 [2024-10-14 16:57:31.597184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:27.219 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:27.478 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:27.478 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:32:27.478 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:27.478 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:27.478 16:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:32:29.383 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:29.383 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:29.383 16:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:29.383 16:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:29.383 16:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:29.383 16:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:32:29.383 16:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:29.649 [global] 00:32:29.649 thread=1 00:32:29.649 invalidate=1 
00:32:29.649 rw=write 00:32:29.649 time_based=1 00:32:29.649 runtime=1 00:32:29.649 ioengine=libaio 00:32:29.649 direct=1 00:32:29.649 bs=4096 00:32:29.649 iodepth=1 00:32:29.649 norandommap=0 00:32:29.649 numjobs=1 00:32:29.649 00:32:29.649 verify_dump=1 00:32:29.649 verify_backlog=512 00:32:29.649 verify_state_save=0 00:32:29.649 do_verify=1 00:32:29.649 verify=crc32c-intel 00:32:29.649 [job0] 00:32:29.649 filename=/dev/nvme0n1 00:32:29.649 Could not set queue depth (nvme0n1) 00:32:29.906 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:29.906 fio-3.35 00:32:29.906 Starting 1 thread 00:32:31.276 00:32:31.276 job0: (groupid=0, jobs=1): err= 0: pid=763769: Mon Oct 14 16:57:35 2024 00:32:31.276 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:32:31.276 slat (nsec): min=9980, max=28664, avg=23194.00, stdev=3576.54 00:32:31.276 clat (usec): min=40827, max=41102, avg=40973.31, stdev=57.90 00:32:31.276 lat (usec): min=40853, max=41111, avg=40996.50, stdev=56.12 00:32:31.276 clat percentiles (usec): 00:32:31.276 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:31.276 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:31.276 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:31.276 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:31.276 | 99.99th=[41157] 00:32:31.276 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:32:31.276 slat (usec): min=10, max=27473, avg=65.67, stdev=1213.63 00:32:31.276 clat (usec): min=134, max=311, avg=185.55, stdev=48.99 00:32:31.276 lat (usec): min=145, max=27784, avg=251.22, stdev=1220.18 00:32:31.276 clat percentiles (usec): 00:32:31.276 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 141], 00:32:31.276 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 235], 00:32:31.276 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 245], 00:32:31.276 | 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 310], 99.95th=[ 310], 00:32:31.276 | 99.99th=[ 310] 00:32:31.276 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:31.276 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:31.276 lat (usec) : 250=94.57%, 500=1.31% 00:32:31.276 lat (msec) : 50=4.12% 00:32:31.276 cpu : usr=0.58%, sys=0.78%, ctx=536, majf=0, minf=1 00:32:31.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.276 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:31.276 00:32:31.276 Run status group 0 (all jobs): 00:32:31.276 READ: bw=85.3KiB/s (87.3kB/s), 85.3KiB/s-85.3KiB/s (87.3kB/s-87.3kB/s), io=88.0KiB (90.1kB), run=1032-1032msec 00:32:31.276 WRITE: bw=1984KiB/s (2032kB/s), 1984KiB/s-1984KiB/s (2032kB/s-2032kB/s), io=2048KiB (2097kB), run=1032-1032msec 00:32:31.276 00:32:31.276 Disk stats (read/write): 00:32:31.276 nvme0n1: ios=43/512, merge=0/0, ticks=1726/89, in_queue=1815, util=98.40% 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:31.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:31.276 16:57:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.276 rmmod nvme_tcp 00:32:31.276 rmmod nvme_fabrics 00:32:31.276 rmmod nvme_keyring 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 763154 ']' 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 763154 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 763154 ']' 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 763154 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 763154 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 763154' 00:32:31.276 killing process with pid 763154 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 763154 00:32:31.276 16:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 763154 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.535 16:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:34.072 00:32:34.072 real 0m13.105s 00:32:34.072 user 0m24.352s 00:32:34.072 sys 0m5.936s 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:34.072 ************************************ 00:32:34.072 END TEST nvmf_nmic 00:32:34.072 ************************************ 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:34.072 ************************************ 00:32:34.072 START TEST nvmf_fio_target 00:32:34.072 ************************************ 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:34.072 * Looking for test storage... 
00:32:34.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:34.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.072 --rc genhtml_branch_coverage=1 00:32:34.072 --rc genhtml_function_coverage=1 00:32:34.072 --rc genhtml_legend=1 00:32:34.072 --rc geninfo_all_blocks=1 00:32:34.072 --rc geninfo_unexecuted_blocks=1 00:32:34.072 00:32:34.072 ' 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:34.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.072 --rc genhtml_branch_coverage=1 00:32:34.072 --rc genhtml_function_coverage=1 00:32:34.072 --rc genhtml_legend=1 00:32:34.072 --rc geninfo_all_blocks=1 00:32:34.072 --rc geninfo_unexecuted_blocks=1 00:32:34.072 00:32:34.072 ' 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:34.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.072 --rc genhtml_branch_coverage=1 00:32:34.072 --rc genhtml_function_coverage=1 00:32:34.072 --rc genhtml_legend=1 00:32:34.072 --rc geninfo_all_blocks=1 00:32:34.072 --rc geninfo_unexecuted_blocks=1 00:32:34.072 00:32:34.072 ' 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:34.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.072 --rc genhtml_branch_coverage=1 00:32:34.072 --rc genhtml_function_coverage=1 00:32:34.072 --rc genhtml_legend=1 00:32:34.072 --rc geninfo_all_blocks=1 00:32:34.072 --rc geninfo_unexecuted_blocks=1 00:32:34.072 
00:32:34.072 ' 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.072 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:34.073 16:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:39.351 16:57:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:39.351 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:39.352 16:57:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:39.352 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:39.352 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:39.352 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:39.352 Found net devices under 0000:86:00.1: cvl_0_1 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:39.352 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.647 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.647 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:39.647 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:39.647 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.647 16:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:39.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:39.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:32:39.647 00:32:39.647 --- 10.0.0.2 ping statistics --- 00:32:39.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.647 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:39.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:32:39.647 00:32:39.647 --- 10.0.0.1 ping statistics --- 00:32:39.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.647 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:39.647 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=767522 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 767522 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 767522 ']' 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:39.961 [2024-10-14 16:57:44.319819] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:39.961 [2024-10-14 16:57:44.320731] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:32:39.961 [2024-10-14 16:57:44.320764] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.961 [2024-10-14 16:57:44.392787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:39.961 [2024-10-14 16:57:44.435197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.961 [2024-10-14 16:57:44.435230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.961 [2024-10-14 16:57:44.435237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.961 [2024-10-14 16:57:44.435243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.961 [2024-10-14 16:57:44.435248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.961 [2024-10-14 16:57:44.436647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.961 [2024-10-14 16:57:44.436672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:39.961 [2024-10-14 16:57:44.436761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.961 [2024-10-14 16:57:44.436762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:39.961 [2024-10-14 16:57:44.504225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:39.961 [2024-10-14 16:57:44.504463] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:39.961 [2024-10-14 16:57:44.505202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:39.961 [2024-10-14 16:57:44.505268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:39.961 [2024-10-14 16:57:44.505413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
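[editor's note] For readers following the bring-up traced above, the nvmf/common.sh helpers reduce to roughly the following sketch. It is illustrative only, not the verbatim script: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, the iptables rule, and the nvmf_tgt flags are copied from this log, while the workspace path and the variable names NS/SPDK_BIN are assumptions for readability.

  #!/usr/bin/env bash
  # Condensed sketch of the physical-NIC namespace setup and target launch shown above.
  NS=cvl_0_0_ns_spdk
  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

  # Start from clean interfaces, then move one port of the e810 pair into a
  # private network namespace that will host the NVMe-oF target.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"

  # Address both ends of the link: the initiator side stays in the root namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up

  # Let NVMe/TCP traffic (port 4420) in on the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Verify connectivity in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  modprobe nvme-tcp

  # Launch nvmf_tgt inside the namespace in interrupt mode on cores 0-3,
  # matching the "-i 0 -e 0xFFFF --interrupt-mode -m 0xF" invocation logged above.
  ip netns exec "$NS" "$SPDK_BIN"/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &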
00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:39.961 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.220 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.220 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:40.220 [2024-10-14 16:57:44.745567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.220 16:57:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:40.478 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:40.478 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:40.737 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:40.737 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:40.996 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:40.996 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:40.996 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:40.996 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:41.256 16:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:41.515 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:41.515 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:41.774 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:41.774 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:41.774 16:57:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:41.774 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:42.032 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:42.292 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:42.292 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:42.550 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:42.550 16:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:42.550 16:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:42.809 [2024-10-14 16:57:47.329477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:42.809 16:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:43.068 16:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:43.327 16:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:43.327 16:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:43.327 16:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:32:43.327 16:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:43.327 16:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:32:43.327 16:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:32:43.327 16:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:32:45.860 16:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:45.860 16:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:32:45.860 16:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:45.860 16:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:32:45.860 16:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:45.860 16:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:32:45.860 16:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:45.860 [global] 00:32:45.860 thread=1 00:32:45.860 invalidate=1 00:32:45.860 rw=write 00:32:45.860 time_based=1 00:32:45.860 runtime=1 00:32:45.860 ioengine=libaio 00:32:45.860 direct=1 00:32:45.860 bs=4096 00:32:45.860 iodepth=1 00:32:45.860 norandommap=0 00:32:45.860 numjobs=1 00:32:45.860 00:32:45.860 verify_dump=1 00:32:45.860 verify_backlog=512 00:32:45.860 verify_state_save=0 00:32:45.860 do_verify=1 00:32:45.860 verify=crc32c-intel 00:32:45.860 [job0] 00:32:45.860 filename=/dev/nvme0n1 00:32:45.860 [job1] 00:32:45.860 filename=/dev/nvme0n2 00:32:45.860 [job2] 00:32:45.860 filename=/dev/nvme0n3 00:32:45.860 [job3] 00:32:45.860 filename=/dev/nvme0n4 00:32:45.860 Could not set queue depth (nvme0n1) 00:32:45.860 Could not set queue depth (nvme0n2) 00:32:45.860 Could not set queue depth (nvme0n3) 00:32:45.860 Could not set queue depth (nvme0n4) 00:32:45.860 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:45.860 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:45.860 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:45.860 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:45.860 fio-3.35 00:32:45.860 Starting 4 threads 00:32:47.238 00:32:47.238 job0: (groupid=0, jobs=1): err= 0: pid=768645: Mon Oct 14 16:57:51 2024 00:32:47.238 read: IOPS=22, BW=88.9KiB/s (91.0kB/s)(92.0KiB/1035msec) 00:32:47.238 slat (nsec): min=7652, max=23273, avg=14843.57, stdev=6032.43 00:32:47.238 clat (usec): min=33431, max=41378, avg=40655.00, stdev=1581.90 00:32:47.238 lat (usec): min=33440, max=41387, avg=40669.85, stdev=1583.28 00:32:47.238 clat percentiles (usec): 00:32:47.238 | 1.00th=[33424], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:47.238 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:47.238 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:47.238 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:47.238 | 99.99th=[41157] 00:32:47.238 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:32:47.238 slat (nsec): min=7823, max=38015, avg=10373.19, stdev=1584.54 00:32:47.238 clat (usec): min=144, max=394, avg=181.19, stdev=15.31 00:32:47.238 lat (usec): min=154, max=432, avg=191.56, stdev=16.08 00:32:47.238 clat percentiles (usec): 00:32:47.238 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:32:47.238 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:32:47.238 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 198], 00:32:47.238 
| 99.00th=[ 225], 99.50th=[ 241], 99.90th=[ 396], 99.95th=[ 396], 00:32:47.238 | 99.99th=[ 396] 00:32:47.238 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:32:47.238 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:47.238 lat (usec) : 250=95.33%, 500=0.37% 00:32:47.238 lat (msec) : 50=4.30% 00:32:47.238 cpu : usr=0.00%, sys=0.68%, ctx=537, majf=0, minf=1 00:32:47.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:47.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.238 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:47.238 job1: (groupid=0, jobs=1): err= 0: pid=768646: Mon Oct 14 16:57:51 2024 00:32:47.238 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:32:47.238 slat (nsec): min=10196, max=33810, avg=24521.91, stdev=4481.10 00:32:47.238 clat (usec): min=40884, max=41919, avg=41066.12, stdev=279.39 00:32:47.238 lat (usec): min=40907, max=41943, avg=41090.64, stdev=278.95 00:32:47.238 clat percentiles (usec): 00:32:47.238 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:47.238 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:47.238 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:47.238 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:47.238 | 99.99th=[41681] 00:32:47.238 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:32:47.238 slat (nsec): min=10916, max=37706, avg=12415.78, stdev=2289.96 00:32:47.238 clat (usec): min=155, max=377, avg=178.58, stdev=15.64 00:32:47.238 lat (usec): min=167, max=414, avg=191.00, stdev=16.27 00:32:47.238 clat percentiles (usec): 00:32:47.238 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:32:47.238 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 180], 00:32:47.238 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 190], 95.00th=[ 198], 00:32:47.238 | 99.00th=[ 225], 99.50th=[ 237], 99.90th=[ 379], 99.95th=[ 379], 00:32:47.238 | 99.99th=[ 379] 00:32:47.238 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:32:47.238 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:47.238 lat (usec) : 250=95.51%, 500=0.37% 00:32:47.238 lat (msec) : 50=4.12% 00:32:47.238 cpu : usr=0.30%, sys=1.10%, ctx=535, majf=0, minf=1 00:32:47.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:47.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.238 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:47.238 job2: (groupid=0, jobs=1): err= 0: pid=768647: Mon Oct 14 16:57:51 2024 00:32:47.238 read: IOPS=2386, BW=9546KiB/s (9776kB/s)(9556KiB/1001msec) 00:32:47.238 slat (nsec): min=6866, max=33961, avg=7686.96, stdev=1149.41 00:32:47.238 clat (usec): min=172, max=443, avg=232.84, stdev=26.29 00:32:47.238 lat (usec): min=190, max=451, avg=240.53, stdev=26.33 00:32:47.238 clat percentiles (usec): 00:32:47.238 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:32:47.238 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 245], 
60.00th=[ 249], 00:32:47.238 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 265], 00:32:47.238 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 420], 99.95th=[ 424], 00:32:47.238 | 99.99th=[ 445] 00:32:47.238 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:47.238 slat (nsec): min=9774, max=39485, avg=10998.58, stdev=1315.10 00:32:47.238 clat (usec): min=122, max=382, avg=151.11, stdev=25.49 00:32:47.238 lat (usec): min=133, max=409, avg=162.11, stdev=25.74 00:32:47.238 clat percentiles (usec): 00:32:47.238 | 1.00th=[ 128], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 00:32:47.238 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:32:47.238 | 70.00th=[ 147], 80.00th=[ 163], 90.00th=[ 198], 95.00th=[ 206], 00:32:47.238 | 99.00th=[ 221], 99.50th=[ 225], 99.90th=[ 338], 99.95th=[ 343], 00:32:47.238 | 99.99th=[ 383] 00:32:47.238 bw ( KiB/s): min=12288, max=12288, per=77.63%, avg=12288.00, stdev= 0.00, samples=1 00:32:47.238 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:32:47.238 lat (usec) : 250=81.61%, 500=18.39% 00:32:47.238 cpu : usr=2.30%, sys=4.90%, ctx=4950, majf=0, minf=1 00:32:47.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:47.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.238 issued rwts: total=2389,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:47.238 job3: (groupid=0, jobs=1): err= 0: pid=768648: Mon Oct 14 16:57:51 2024 00:32:47.238 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:32:47.238 slat (nsec): min=9827, max=23485, avg=21734.68, stdev=2908.20 00:32:47.238 clat (usec): min=40380, max=41099, avg=40944.67, stdev=137.36 00:32:47.238 lat (usec): min=40390, max=41117, avg=40966.41, stdev=139.55 00:32:47.238 clat percentiles (usec): 00:32:47.238 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:47.238 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:47.238 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:47.238 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:47.238 | 99.99th=[41157] 00:32:47.238 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:32:47.238 slat (nsec): min=9454, max=39637, avg=10562.92, stdev=1910.53 00:32:47.238 clat (usec): min=125, max=346, avg=194.75, stdev=22.32 00:32:47.238 lat (usec): min=136, max=356, avg=205.31, stdev=22.37 00:32:47.238 clat percentiles (usec): 00:32:47.238 | 1.00th=[ 129], 5.00th=[ 145], 10.00th=[ 176], 20.00th=[ 184], 00:32:47.238 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:32:47.238 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 217], 95.00th=[ 221], 00:32:47.238 | 99.00th=[ 231], 99.50th=[ 277], 99.90th=[ 347], 99.95th=[ 347], 00:32:47.238 | 99.99th=[ 347] 00:32:47.238 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:32:47.238 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:47.238 lat (usec) : 250=95.32%, 500=0.56% 00:32:47.238 lat (msec) : 50=4.12% 00:32:47.238 cpu : usr=0.30%, sys=0.40%, ctx=534, majf=0, minf=2 00:32:47.238 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:47.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.238 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.238 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.238 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:47.238 00:32:47.238 Run status group 0 (all jobs): 00:32:47.238 READ: bw=9492KiB/s (9720kB/s), 87.3KiB/s-9546KiB/s (89.4kB/s-9776kB/s), io=9824KiB (10.1MB), run=1001-1035msec 00:32:47.238 WRITE: bw=15.5MiB/s (16.2MB/s), 1979KiB/s-9.99MiB/s (2026kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1035msec 00:32:47.238 00:32:47.238 Disk stats (read/write): 00:32:47.238 nvme0n1: ios=44/512, merge=0/0, ticks=1595/88, in_queue=1683, util=85.97% 00:32:47.238 nvme0n2: ios=67/512, merge=0/0, ticks=1297/87, in_queue=1384, util=90.04% 00:32:47.238 nvme0n3: ios=2072/2192, merge=0/0, ticks=1370/335, in_queue=1705, util=93.55% 00:32:47.238 nvme0n4: ios=75/512, merge=0/0, ticks=803/98, in_queue=901, util=95.38% 00:32:47.239 16:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:47.239 [global] 00:32:47.239 thread=1 00:32:47.239 invalidate=1 00:32:47.239 rw=randwrite 00:32:47.239 time_based=1 00:32:47.239 runtime=1 00:32:47.239 ioengine=libaio 00:32:47.239 direct=1 00:32:47.239 bs=4096 00:32:47.239 iodepth=1 00:32:47.239 norandommap=0 00:32:47.239 numjobs=1 00:32:47.239 00:32:47.239 verify_dump=1 00:32:47.239 verify_backlog=512 00:32:47.239 verify_state_save=0 00:32:47.239 do_verify=1 00:32:47.239 verify=crc32c-intel 00:32:47.239 [job0] 00:32:47.239 filename=/dev/nvme0n1 00:32:47.239 [job1] 00:32:47.239 filename=/dev/nvme0n2 00:32:47.239 [job2] 00:32:47.239 filename=/dev/nvme0n3 00:32:47.239 [job3] 00:32:47.239 filename=/dev/nvme0n4 00:32:47.239 Could not set queue depth (nvme0n1) 00:32:47.239 Could not set queue depth (nvme0n2) 00:32:47.239 Could not set queue depth (nvme0n3) 00:32:47.239 Could not set queue depth (nvme0n4) 00:32:47.497 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.497 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.497 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.497 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.497 fio-3.35 00:32:47.497 Starting 4 threads 00:32:48.875 00:32:48.875 job0: (groupid=0, jobs=1): err= 0: pid=769018: Mon Oct 14 16:57:53 2024 00:32:48.875 read: IOPS=39, BW=159KiB/s (163kB/s)(160KiB/1005msec) 00:32:48.875 slat (nsec): min=7632, max=27336, avg=15651.63, stdev=7112.64 00:32:48.875 clat (usec): min=229, max=41968, avg=22681.97, stdev=20545.43 00:32:48.875 lat (usec): min=237, max=41992, avg=22697.62, stdev=20549.40 00:32:48.875 clat percentiles (usec): 00:32:48.875 | 1.00th=[ 229], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 243], 00:32:48.875 | 30.00th=[ 251], 40.00th=[ 289], 50.00th=[40633], 60.00th=[41157], 00:32:48.875 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:48.875 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:48.875 | 99.99th=[42206] 00:32:48.875 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:32:48.875 slat (nsec): min=9744, max=39184, avg=12729.45, stdev=2111.76 00:32:48.875 clat (usec): min=132, max=382, avg=173.02, stdev=19.56 
00:32:48.875 lat (usec): min=145, max=392, avg=185.75, stdev=20.06 00:32:48.875 clat percentiles (usec): 00:32:48.875 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:32:48.875 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:32:48.875 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:32:48.875 | 99.00th=[ 215], 99.50th=[ 258], 99.90th=[ 383], 99.95th=[ 383], 00:32:48.875 | 99.99th=[ 383] 00:32:48.875 bw ( KiB/s): min= 4096, max= 4096, per=22.87%, avg=4096.00, stdev= 0.00, samples=1 00:32:48.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:48.875 lat (usec) : 250=94.20%, 500=1.81% 00:32:48.875 lat (msec) : 50=3.99% 00:32:48.875 cpu : usr=0.50%, sys=1.00%, ctx=552, majf=0, minf=1 00:32:48.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.875 issued rwts: total=40,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:48.875 job1: (groupid=0, jobs=1): err= 0: pid=769019: Mon Oct 14 16:57:53 2024 00:32:48.875 read: IOPS=2173, BW=8695KiB/s (8904kB/s)(8704KiB/1001msec) 00:32:48.875 slat (nsec): min=6777, max=37897, avg=7857.22, stdev=1249.37 00:32:48.875 clat (usec): min=174, max=296, avg=228.15, stdev=22.66 00:32:48.875 lat (usec): min=184, max=306, avg=236.01, stdev=22.62 00:32:48.875 clat percentiles (usec): 00:32:48.875 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 206], 00:32:48.875 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 243], 00:32:48.875 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 253], 00:32:48.875 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 285], 99.95th=[ 285], 00:32:48.875 | 99.99th=[ 297] 00:32:48.875 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:48.875 slat (nsec): min=9667, max=61624, avg=10898.04, stdev=1710.85 00:32:48.875 clat (usec): min=121, max=318, avg=173.68, stdev=44.61 00:32:48.875 lat (usec): min=131, max=379, avg=184.58, stdev=44.77 00:32:48.875 clat percentiles (usec): 00:32:48.875 | 1.00th=[ 127], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 135], 00:32:48.875 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 157], 60.00th=[ 165], 00:32:48.875 | 70.00th=[ 182], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 247], 00:32:48.875 | 99.00th=[ 255], 99.50th=[ 258], 99.90th=[ 310], 99.95th=[ 314], 00:32:48.875 | 99.99th=[ 318] 00:32:48.875 bw ( KiB/s): min=11080, max=11080, per=61.86%, avg=11080.00, stdev= 0.00, samples=1 00:32:48.875 iops : min= 2770, max= 2770, avg=2770.00, stdev= 0.00, samples=1 00:32:48.875 lat (usec) : 250=93.20%, 500=6.80% 00:32:48.875 cpu : usr=4.00%, sys=7.20%, ctx=4737, majf=0, minf=1 00:32:48.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.875 issued rwts: total=2176,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:48.875 job2: (groupid=0, jobs=1): err= 0: pid=769020: Mon Oct 14 16:57:53 2024 00:32:48.875 read: IOPS=540, BW=2161KiB/s (2213kB/s)(2224KiB/1029msec) 00:32:48.875 slat (nsec): min=6847, max=36026, avg=7919.12, stdev=2536.45 00:32:48.875 clat (usec): 
min=217, max=41975, avg=1496.21, stdev=7040.43 00:32:48.875 lat (usec): min=224, max=41989, avg=1504.13, stdev=7042.02 00:32:48.875 clat percentiles (usec): 00:32:48.875 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:32:48.875 | 30.00th=[ 245], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:32:48.875 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 262], 00:32:48.875 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:48.875 | 99.99th=[42206] 00:32:48.875 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:32:48.875 slat (nsec): min=9657, max=43502, avg=11993.73, stdev=2828.98 00:32:48.875 clat (usec): min=139, max=311, avg=172.05, stdev=17.10 00:32:48.875 lat (usec): min=153, max=328, avg=184.04, stdev=18.13 00:32:48.875 clat percentiles (usec): 00:32:48.875 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:32:48.875 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 174], 00:32:48.875 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:32:48.875 | 99.00th=[ 221], 99.50th=[ 231], 99.90th=[ 297], 99.95th=[ 310], 00:32:48.875 | 99.99th=[ 310] 00:32:48.875 bw ( KiB/s): min= 8192, max= 8192, per=45.73%, avg=8192.00, stdev= 0.00, samples=1 00:32:48.875 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:48.875 lat (usec) : 250=88.61%, 500=10.32% 00:32:48.875 lat (msec) : 50=1.08% 00:32:48.875 cpu : usr=0.88%, sys=1.56%, ctx=1583, majf=0, minf=1 00:32:48.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.875 issued rwts: total=556,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:48.875 job3: (groupid=0, jobs=1): err= 0: pid=769021: Mon Oct 14 16:57:53 2024 00:32:48.875 read: IOPS=62, BW=252KiB/s (258kB/s)(256KiB/1016msec) 00:32:48.875 slat (nsec): min=7150, max=23929, avg=12670.02, stdev=6526.41 00:32:48.875 clat (usec): min=223, max=42464, avg=14272.68, stdev=19541.78 00:32:48.875 lat (usec): min=231, max=42473, avg=14285.35, stdev=19539.68 00:32:48.875 clat percentiles (usec): 00:32:48.875 | 1.00th=[ 225], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 233], 00:32:48.875 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 260], 00:32:48.875 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:48.875 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:48.875 | 99.99th=[42206] 00:32:48.875 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:32:48.875 slat (nsec): min=9499, max=38447, avg=12345.97, stdev=2835.94 00:32:48.875 clat (usec): min=136, max=386, avg=182.76, stdev=27.46 00:32:48.875 lat (usec): min=151, max=399, avg=195.11, stdev=27.67 00:32:48.875 clat percentiles (usec): 00:32:48.875 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:32:48.875 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:32:48.875 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 239], 95.00th=[ 243], 00:32:48.875 | 99.00th=[ 249], 99.50th=[ 289], 99.90th=[ 388], 99.95th=[ 388], 00:32:48.875 | 99.99th=[ 388] 00:32:48.875 bw ( KiB/s): min= 4096, max= 4096, per=22.87%, avg=4096.00, stdev= 0.00, samples=1 00:32:48.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:48.875 lat 
(usec) : 250=93.23%, 500=2.95% 00:32:48.875 lat (msec) : 50=3.82% 00:32:48.875 cpu : usr=0.49%, sys=0.49%, ctx=577, majf=0, minf=1 00:32:48.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:48.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:48.875 issued rwts: total=64,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:48.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:48.875 00:32:48.875 Run status group 0 (all jobs): 00:32:48.875 READ: bw=10.8MiB/s (11.3MB/s), 159KiB/s-8695KiB/s (163kB/s-8904kB/s), io=11.1MiB (11.6MB), run=1001-1029msec 00:32:48.875 WRITE: bw=17.5MiB/s (18.3MB/s), 2016KiB/s-9.99MiB/s (2064kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1029msec 00:32:48.875 00:32:48.875 Disk stats (read/write): 00:32:48.875 nvme0n1: ios=83/512, merge=0/0, ticks=913/82, in_queue=995, util=90.18% 00:32:48.875 nvme0n2: ios=1984/2048, merge=0/0, ticks=438/321, in_queue=759, util=86.90% 00:32:48.875 nvme0n3: ios=608/1024, merge=0/0, ticks=1542/169, in_queue=1711, util=97.80% 00:32:48.875 nvme0n4: ios=85/512, merge=0/0, ticks=1248/88, in_queue=1336, util=99.05% 00:32:48.875 16:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:48.875 [global] 00:32:48.875 thread=1 00:32:48.875 invalidate=1 00:32:48.875 rw=write 00:32:48.875 time_based=1 00:32:48.875 runtime=1 00:32:48.875 ioengine=libaio 00:32:48.875 direct=1 00:32:48.875 bs=4096 00:32:48.875 iodepth=128 00:32:48.875 norandommap=0 00:32:48.875 numjobs=1 00:32:48.875 00:32:48.875 verify_dump=1 00:32:48.875 verify_backlog=512 00:32:48.875 verify_state_save=0 00:32:48.875 do_verify=1 00:32:48.875 verify=crc32c-intel 00:32:48.875 [job0] 00:32:48.875 filename=/dev/nvme0n1 00:32:48.875 [job1] 00:32:48.875 filename=/dev/nvme0n2 00:32:48.875 [job2] 00:32:48.875 filename=/dev/nvme0n3 00:32:48.875 [job3] 00:32:48.875 filename=/dev/nvme0n4 00:32:48.875 Could not set queue depth (nvme0n1) 00:32:48.875 Could not set queue depth (nvme0n2) 00:32:48.875 Could not set queue depth (nvme0n3) 00:32:48.875 Could not set queue depth (nvme0n4) 00:32:48.876 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:48.876 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:48.876 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:48.876 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:48.876 fio-3.35 00:32:48.876 Starting 4 threads 00:32:50.254 00:32:50.254 job0: (groupid=0, jobs=1): err= 0: pid=769389: Mon Oct 14 16:57:54 2024 00:32:50.254 read: IOPS=4947, BW=19.3MiB/s (20.3MB/s)(19.6MiB/1014msec) 00:32:50.254 slat (nsec): min=1067, max=20044k, avg=92236.35, stdev=663967.33 00:32:50.254 clat (usec): min=2248, max=51104, avg=12365.53, stdev=6481.74 00:32:50.254 lat (usec): min=2262, max=51413, avg=12457.77, stdev=6534.66 00:32:50.254 clat percentiles (usec): 00:32:50.254 | 1.00th=[ 3720], 5.00th=[ 6521], 10.00th=[ 8586], 20.00th=[ 9503], 00:32:50.254 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:32:50.255 | 70.00th=[11731], 80.00th=[12780], 90.00th=[17171], 95.00th=[26084], 00:32:50.255 | 
99.00th=[38011], 99.50th=[47449], 99.90th=[51119], 99.95th=[51119], 00:32:50.255 | 99.99th=[51119] 00:32:50.255 write: IOPS=5049, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1014msec); 0 zone resets 00:32:50.255 slat (nsec): min=1879, max=19127k, avg=91748.50, stdev=673956.59 00:32:50.255 clat (usec): min=1579, max=71752, avg=12219.00, stdev=8904.79 00:32:50.255 lat (usec): min=1590, max=71765, avg=12310.75, stdev=8952.65 00:32:50.255 clat percentiles (usec): 00:32:50.255 | 1.00th=[ 2606], 5.00th=[ 6194], 10.00th=[ 7701], 20.00th=[ 8979], 00:32:50.255 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:32:50.255 | 70.00th=[11207], 80.00th=[11731], 90.00th=[15139], 95.00th=[23462], 00:32:50.255 | 99.00th=[64750], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:32:50.255 | 99.99th=[71828] 00:32:50.255 bw ( KiB/s): min=16384, max=24576, per=28.97%, avg=20480.00, stdev=5792.62, samples=2 00:32:50.255 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:32:50.255 lat (msec) : 2=0.32%, 4=1.59%, 10=28.44%, 20=62.26%, 50=5.99% 00:32:50.255 lat (msec) : 100=1.41% 00:32:50.255 cpu : usr=2.96%, sys=4.44%, ctx=564, majf=0, minf=1 00:32:50.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:50.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:50.255 issued rwts: total=5017,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:50.255 job1: (groupid=0, jobs=1): err= 0: pid=769390: Mon Oct 14 16:57:54 2024 00:32:50.255 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1007msec) 00:32:50.255 slat (nsec): min=1741, max=15736k, avg=116457.76, stdev=1006614.45 00:32:50.255 clat (usec): min=4203, max=48969, avg=17161.85, stdev=9199.78 00:32:50.255 lat (usec): min=4209, max=56247, avg=17278.31, stdev=9268.47 00:32:50.255 clat percentiles (usec): 00:32:50.255 | 1.00th=[ 4555], 5.00th=[ 6587], 10.00th=[ 7635], 20.00th=[10421], 00:32:50.255 | 30.00th=[11600], 40.00th=[12780], 50.00th=[14222], 60.00th=[16188], 00:32:50.255 | 70.00th=[20055], 80.00th=[25297], 90.00th=[30802], 95.00th=[40109], 00:32:50.255 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:32:50.255 | 99.99th=[49021] 00:32:50.255 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:32:50.255 slat (usec): min=2, max=17347, avg=131.82, stdev=1026.77 00:32:50.255 clat (usec): min=1255, max=144287, avg=18647.54, stdev=21813.59 00:32:50.255 lat (usec): min=1268, max=144300, avg=18779.36, stdev=21952.89 00:32:50.255 clat percentiles (msec): 00:32:50.255 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:32:50.255 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15], 00:32:50.255 | 70.00th=[ 16], 80.00th=[ 21], 90.00th=[ 28], 95.00th=[ 63], 00:32:50.255 | 99.00th=[ 128], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:32:50.255 | 99.99th=[ 144] 00:32:50.255 bw ( KiB/s): min=13528, max=15144, per=20.28%, avg=14336.00, stdev=1142.68, samples=2 00:32:50.255 iops : min= 3382, max= 3786, avg=3584.00, stdev=285.67, samples=2 00:32:50.255 lat (msec) : 2=0.10%, 10=27.87%, 20=47.09%, 50=21.75%, 100=1.75% 00:32:50.255 lat (msec) : 250=1.44% 00:32:50.255 cpu : usr=3.08%, sys=4.87%, ctx=206, majf=0, minf=1 00:32:50.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:32:50.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:50.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:50.255 issued rwts: total=3556,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:50.255 job2: (groupid=0, jobs=1): err= 0: pid=769391: Mon Oct 14 16:57:54 2024 00:32:50.255 read: IOPS=3084, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1008msec) 00:32:50.255 slat (nsec): min=1791, max=18951k, avg=150027.95, stdev=1067998.82 00:32:50.255 clat (msec): min=6, max=114, avg=17.47, stdev=13.47 00:32:50.255 lat (msec): min=6, max=114, avg=17.62, stdev=13.62 00:32:50.255 clat percentiles (msec): 00:32:50.255 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:32:50.255 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:32:50.255 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 25], 95.00th=[ 37], 00:32:50.255 | 99.00th=[ 89], 99.50th=[ 101], 99.90th=[ 115], 99.95th=[ 115], 00:32:50.255 | 99.99th=[ 115] 00:32:50.255 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:32:50.255 slat (usec): min=2, max=14192, avg=142.66, stdev=975.92 00:32:50.255 clat (msec): min=5, max=118, avg=20.50, stdev=21.77 00:32:50.255 lat (msec): min=5, max=118, avg=20.64, stdev=21.90 00:32:50.255 clat percentiles (msec): 00:32:50.255 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:32:50.255 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 17], 00:32:50.255 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 31], 95.00th=[ 82], 00:32:50.255 | 99.00th=[ 109], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 118], 00:32:50.255 | 99.99th=[ 118] 00:32:50.255 bw ( KiB/s): min=10376, max=17576, per=19.77%, avg=13976.00, stdev=5091.17, samples=2 00:32:50.255 iops : min= 2594, max= 4394, avg=3494.00, stdev=1272.79, samples=2 00:32:50.255 lat (msec) : 10=15.14%, 20=64.34%, 50=14.15%, 100=4.59%, 250=1.79% 00:32:50.255 cpu : usr=2.78%, sys=4.67%, ctx=241, majf=0, minf=1 00:32:50.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:50.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:50.255 issued rwts: total=3109,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:50.255 job3: (groupid=0, jobs=1): err= 0: pid=769392: Mon Oct 14 16:57:54 2024 00:32:50.255 read: IOPS=5475, BW=21.4MiB/s (22.4MB/s)(21.6MiB/1009msec) 00:32:50.255 slat (nsec): min=1411, max=10274k, avg=90262.83, stdev=676313.90 00:32:50.255 clat (usec): min=4147, max=22079, avg=11659.87, stdev=3196.31 00:32:50.255 lat (usec): min=5077, max=22086, avg=11750.13, stdev=3231.21 00:32:50.255 clat percentiles (usec): 00:32:50.255 | 1.00th=[ 6456], 5.00th=[ 7439], 10.00th=[ 8094], 20.00th=[ 8717], 00:32:50.255 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:32:50.255 | 70.00th=[12911], 80.00th=[14484], 90.00th=[16712], 95.00th=[17695], 00:32:50.255 | 99.00th=[19792], 99.50th=[20317], 99.90th=[21627], 99.95th=[21627], 00:32:50.255 | 99.99th=[22152] 00:32:50.255 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:32:50.255 slat (usec): min=2, max=9612, avg=82.27, stdev=583.72 00:32:50.255 clat (usec): min=1530, max=21648, avg=11186.98, stdev=2585.36 00:32:50.255 lat (usec): min=1543, max=21668, avg=11269.25, stdev=2594.70 00:32:50.255 clat percentiles (usec): 00:32:50.255 | 1.00th=[ 6063], 5.00th=[ 7242], 10.00th=[ 7635], 
20.00th=[ 8848], 00:32:50.255 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:32:50.255 | 70.00th=[11863], 80.00th=[14484], 90.00th=[15008], 95.00th=[15270], 00:32:50.255 | 99.00th=[15926], 99.50th=[16188], 99.90th=[20841], 99.95th=[21365], 00:32:50.255 | 99.99th=[21627] 00:32:50.255 bw ( KiB/s): min=21536, max=23520, per=31.87%, avg=22528.00, stdev=1402.90, samples=2 00:32:50.255 iops : min= 5384, max= 5880, avg=5632.00, stdev=350.72, samples=2 00:32:50.255 lat (msec) : 2=0.02%, 10=32.60%, 20=66.93%, 50=0.46% 00:32:50.255 cpu : usr=4.66%, sys=8.04%, ctx=373, majf=0, minf=1 00:32:50.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:50.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:50.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:50.255 issued rwts: total=5525,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:50.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:50.255 00:32:50.255 Run status group 0 (all jobs): 00:32:50.255 READ: bw=66.3MiB/s (69.5MB/s), 12.0MiB/s-21.4MiB/s (12.6MB/s-22.4MB/s), io=67.2MiB (70.5MB), run=1007-1014msec 00:32:50.255 WRITE: bw=69.0MiB/s (72.4MB/s), 13.9MiB/s-21.8MiB/s (14.6MB/s-22.9MB/s), io=70.0MiB (73.4MB), run=1007-1014msec 00:32:50.255 00:32:50.255 Disk stats (read/write): 00:32:50.255 nvme0n1: ios=4132/4325, merge=0/0, ticks=24587/22707, in_queue=47294, util=98.00% 00:32:50.255 nvme0n2: ios=3076/3187, merge=0/0, ticks=45979/57239, in_queue=103218, util=86.99% 00:32:50.255 nvme0n3: ios=2595/3072, merge=0/0, ticks=41852/63058, in_queue=104910, util=95.42% 00:32:50.255 nvme0n4: ios=4666/4792, merge=0/0, ticks=53131/51834, in_queue=104965, util=98.11% 00:32:50.255 16:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:50.255 [global] 00:32:50.255 thread=1 00:32:50.255 invalidate=1 00:32:50.255 rw=randwrite 00:32:50.255 time_based=1 00:32:50.255 runtime=1 00:32:50.255 ioengine=libaio 00:32:50.255 direct=1 00:32:50.255 bs=4096 00:32:50.255 iodepth=128 00:32:50.255 norandommap=0 00:32:50.255 numjobs=1 00:32:50.255 00:32:50.255 verify_dump=1 00:32:50.255 verify_backlog=512 00:32:50.255 verify_state_save=0 00:32:50.255 do_verify=1 00:32:50.255 verify=crc32c-intel 00:32:50.255 [job0] 00:32:50.255 filename=/dev/nvme0n1 00:32:50.255 [job1] 00:32:50.255 filename=/dev/nvme0n2 00:32:50.255 [job2] 00:32:50.255 filename=/dev/nvme0n3 00:32:50.255 [job3] 00:32:50.255 filename=/dev/nvme0n4 00:32:50.255 Could not set queue depth (nvme0n1) 00:32:50.255 Could not set queue depth (nvme0n2) 00:32:50.255 Could not set queue depth (nvme0n3) 00:32:50.255 Could not set queue depth (nvme0n4) 00:32:50.514 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:50.514 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:50.514 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:50.514 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:50.514 fio-3.35 00:32:50.514 Starting 4 threads 00:32:51.892 00:32:51.892 job0: (groupid=0, jobs=1): err= 0: pid=769767: Mon Oct 14 16:57:56 2024 00:32:51.892 read: IOPS=5079, BW=19.8MiB/s 
(20.8MB/s)(20.0MiB/1008msec) 00:32:51.892 slat (nsec): min=1023, max=9609.3k, avg=82220.20, stdev=524829.12 00:32:51.892 clat (usec): min=1470, max=31334, avg=10941.22, stdev=3882.11 00:32:51.892 lat (usec): min=1480, max=34479, avg=11023.44, stdev=3909.14 00:32:51.892 clat percentiles (usec): 00:32:51.892 | 1.00th=[ 2057], 5.00th=[ 4293], 10.00th=[ 7439], 20.00th=[ 8586], 00:32:51.892 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10552], 60.00th=[10945], 00:32:51.892 | 70.00th=[11600], 80.00th=[13173], 90.00th=[15795], 95.00th=[17695], 00:32:51.892 | 99.00th=[22938], 99.50th=[29230], 99.90th=[31327], 99.95th=[31327], 00:32:51.892 | 99.99th=[31327] 00:32:51.892 write: IOPS=5378, BW=21.0MiB/s (22.0MB/s)(21.2MiB/1008msec); 0 zone resets 00:32:51.892 slat (nsec): min=1714, max=20692k, avg=97709.09, stdev=752561.46 00:32:51.892 clat (usec): min=735, max=60355, avg=12958.43, stdev=9033.01 00:32:51.892 lat (usec): min=758, max=60384, avg=13056.14, stdev=9102.46 00:32:51.892 clat percentiles (usec): 00:32:51.892 | 1.00th=[ 2769], 5.00th=[ 5276], 10.00th=[ 7046], 20.00th=[ 8848], 00:32:51.892 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10290], 60.00th=[10683], 00:32:51.892 | 70.00th=[11469], 80.00th=[12780], 90.00th=[20317], 95.00th=[38536], 00:32:51.892 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:32:51.892 | 99.99th=[60556] 00:32:51.892 bw ( KiB/s): min=19376, max=22984, per=28.49%, avg=21180.00, stdev=2551.24, samples=2 00:32:51.892 iops : min= 4844, max= 5746, avg=5295.00, stdev=637.81, samples=2 00:32:51.892 lat (usec) : 750=0.01%, 1000=0.07% 00:32:51.892 lat (msec) : 2=0.66%, 4=2.56%, 10=31.49%, 20=58.73%, 50=6.29% 00:32:51.892 lat (msec) : 100=0.19% 00:32:51.892 cpu : usr=2.78%, sys=4.57%, ctx=522, majf=0, minf=1 00:32:51.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:51.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:51.892 issued rwts: total=5120,5422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:51.892 job1: (groupid=0, jobs=1): err= 0: pid=769768: Mon Oct 14 16:57:56 2024 00:32:51.892 read: IOPS=4089, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1002msec) 00:32:51.892 slat (nsec): min=1185, max=15419k, avg=93516.57, stdev=839007.81 00:32:51.892 clat (usec): min=560, max=39692, avg=14360.33, stdev=6475.57 00:32:51.892 lat (usec): min=2074, max=39700, avg=14453.84, stdev=6515.55 00:32:51.892 clat percentiles (usec): 00:32:51.892 | 1.00th=[ 4555], 5.00th=[ 6849], 10.00th=[ 7963], 20.00th=[ 8979], 00:32:51.892 | 30.00th=[10028], 40.00th=[11600], 50.00th=[13304], 60.00th=[14091], 00:32:51.892 | 70.00th=[16909], 80.00th=[19530], 90.00th=[24249], 95.00th=[27657], 00:32:51.892 | 99.00th=[32900], 99.50th=[38536], 99.90th=[39584], 99.95th=[39584], 00:32:51.892 | 99.99th=[39584] 00:32:51.892 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:32:51.892 slat (nsec): min=1983, max=18161k, avg=109214.77, stdev=833915.15 00:32:51.892 clat (usec): min=525, max=85225, avg=14824.00, stdev=11695.98 00:32:51.892 lat (usec): min=594, max=85231, avg=14933.21, stdev=11751.81 00:32:51.892 clat percentiles (usec): 00:32:51.892 | 1.00th=[ 3654], 5.00th=[ 6128], 10.00th=[ 7439], 20.00th=[ 8094], 00:32:51.892 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[11207], 60.00th=[13304], 00:32:51.892 | 70.00th=[15926], 80.00th=[19268], 90.00th=[24249], 
95.00th=[32375], 00:32:51.892 | 99.00th=[78119], 99.50th=[83362], 99.90th=[85459], 99.95th=[85459], 00:32:51.892 | 99.99th=[85459] 00:32:51.892 bw ( KiB/s): min=16384, max=16384, per=22.04%, avg=16384.00, stdev= 0.00, samples=1 00:32:51.892 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:32:51.892 lat (usec) : 750=0.03% 00:32:51.892 lat (msec) : 2=0.23%, 4=0.82%, 10=33.16%, 20=48.00%, 50=16.30% 00:32:51.892 lat (msec) : 100=1.46% 00:32:51.892 cpu : usr=3.40%, sys=5.00%, ctx=312, majf=0, minf=1 00:32:51.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:51.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:51.892 issued rwts: total=4098,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:51.892 job2: (groupid=0, jobs=1): err= 0: pid=769769: Mon Oct 14 16:57:56 2024 00:32:51.892 read: IOPS=3175, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1007msec) 00:32:51.892 slat (nsec): min=1726, max=18057k, avg=134126.77, stdev=1051297.63 00:32:51.892 clat (usec): min=1925, max=71283, avg=17616.94, stdev=8997.71 00:32:51.892 lat (usec): min=1932, max=71291, avg=17751.07, stdev=9061.70 00:32:51.892 clat percentiles (usec): 00:32:51.892 | 1.00th=[ 4883], 5.00th=[ 6325], 10.00th=[ 8586], 20.00th=[10421], 00:32:51.892 | 30.00th=[12649], 40.00th=[13829], 50.00th=[17171], 60.00th=[19268], 00:32:51.892 | 70.00th=[20579], 80.00th=[22152], 90.00th=[28705], 95.00th=[31065], 00:32:51.892 | 99.00th=[57410], 99.50th=[64226], 99.90th=[70779], 99.95th=[70779], 00:32:51.892 | 99.99th=[70779] 00:32:51.892 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:32:51.892 slat (usec): min=2, max=14764, avg=136.27, stdev=851.63 00:32:51.892 clat (usec): min=1581, max=71294, avg=19881.59, stdev=15918.36 00:32:51.892 lat (usec): min=1595, max=71303, avg=20017.87, stdev=16030.36 00:32:51.892 clat percentiles (usec): 00:32:51.892 | 1.00th=[ 2008], 5.00th=[ 7242], 10.00th=[ 8848], 20.00th=[10552], 00:32:51.892 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12649], 60.00th=[16319], 00:32:51.892 | 70.00th=[17957], 80.00th=[24249], 90.00th=[53740], 95.00th=[59507], 00:32:51.892 | 99.00th=[65274], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:32:51.892 | 99.99th=[70779] 00:32:51.892 bw ( KiB/s): min=12272, max=16384, per=19.27%, avg=14328.00, stdev=2907.62, samples=2 00:32:51.892 iops : min= 3068, max= 4096, avg=3582.00, stdev=726.91, samples=2 00:32:51.892 lat (msec) : 2=0.65%, 4=0.41%, 10=16.43%, 20=53.21%, 50=22.07% 00:32:51.892 lat (msec) : 100=7.23% 00:32:51.892 cpu : usr=2.19%, sys=4.08%, ctx=286, majf=0, minf=1 00:32:51.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:51.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:51.892 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:51.892 job3: (groupid=0, jobs=1): err= 0: pid=769770: Mon Oct 14 16:57:56 2024 00:32:51.892 read: IOPS=4810, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1007msec) 00:32:51.892 slat (nsec): min=1716, max=14601k, avg=98463.04, stdev=560984.85 00:32:51.892 clat (usec): min=1547, max=27175, avg=12470.25, stdev=2552.88 00:32:51.892 lat (usec): min=7521, max=27180, avg=12568.71, 
stdev=2567.87 00:32:51.892 clat percentiles (usec): 00:32:51.892 | 1.00th=[ 8225], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[11076], 00:32:51.892 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12387], 00:32:51.892 | 70.00th=[13173], 80.00th=[14091], 90.00th=[15401], 95.00th=[16581], 00:32:51.892 | 99.00th=[22938], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:32:51.892 | 99.99th=[27132] 00:32:51.892 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:32:51.892 slat (usec): min=2, max=22996, avg=97.37, stdev=619.96 00:32:51.892 clat (usec): min=6953, max=50795, avg=13059.57, stdev=4565.66 00:32:51.892 lat (usec): min=6958, max=50819, avg=13156.94, stdev=4614.50 00:32:51.892 clat percentiles (usec): 00:32:51.892 | 1.00th=[ 8455], 5.00th=[10290], 10.00th=[10945], 20.00th=[11076], 00:32:51.892 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:32:51.892 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15270], 95.00th=[16909], 00:32:51.892 | 99.00th=[39584], 99.50th=[39584], 99.90th=[41157], 99.95th=[41157], 00:32:51.892 | 99.99th=[50594] 00:32:51.892 bw ( KiB/s): min=18784, max=22176, per=27.55%, avg=20480.00, stdev=2398.51, samples=2 00:32:51.892 iops : min= 4696, max= 5544, avg=5120.00, stdev=599.63, samples=2 00:32:51.892 lat (msec) : 2=0.01%, 10=6.25%, 20=90.75%, 50=2.98%, 100=0.01% 00:32:51.892 cpu : usr=3.28%, sys=6.26%, ctx=551, majf=0, minf=1 00:32:51.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:51.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:51.892 issued rwts: total=4844,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:51.892 00:32:51.892 Run status group 0 (all jobs): 00:32:51.892 READ: bw=66.9MiB/s (70.1MB/s), 12.4MiB/s-19.8MiB/s (13.0MB/s-20.8MB/s), io=67.4MiB (70.7MB), run=1002-1008msec 00:32:51.892 WRITE: bw=72.6MiB/s (76.1MB/s), 13.9MiB/s-21.0MiB/s (14.6MB/s-22.0MB/s), io=73.2MiB (76.7MB), run=1002-1008msec 00:32:51.892 00:32:51.892 Disk stats (read/write): 00:32:51.892 nvme0n1: ios=4142/4143, merge=0/0, ticks=24700/34017, in_queue=58717, util=84.57% 00:32:51.892 nvme0n2: ios=3072/3431, merge=0/0, ticks=45711/52468, in_queue=98179, util=85.06% 00:32:51.892 nvme0n3: ios=3129/3231, merge=0/0, ticks=49035/47784, in_queue=96819, util=95.97% 00:32:51.892 nvme0n4: ios=4113/4345, merge=0/0, ticks=25620/26145, in_queue=51765, util=95.92% 00:32:51.892 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:51.892 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=769996 00:32:51.892 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:51.892 16:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:51.892 [global] 00:32:51.892 thread=1 00:32:51.892 invalidate=1 00:32:51.892 rw=read 00:32:51.892 time_based=1 00:32:51.892 runtime=10 00:32:51.892 ioengine=libaio 00:32:51.892 direct=1 00:32:51.892 bs=4096 00:32:51.892 iodepth=1 00:32:51.892 norandommap=1 00:32:51.892 numjobs=1 00:32:51.892 00:32:51.892 [job0] 00:32:51.892 filename=/dev/nvme0n1 00:32:51.892 [job1] 00:32:51.892 filename=/dev/nvme0n2 00:32:51.892 
[job2] 00:32:51.892 filename=/dev/nvme0n3 00:32:51.893 [job3] 00:32:51.893 filename=/dev/nvme0n4 00:32:51.893 Could not set queue depth (nvme0n1) 00:32:51.893 Could not set queue depth (nvme0n2) 00:32:51.893 Could not set queue depth (nvme0n3) 00:32:51.893 Could not set queue depth (nvme0n4) 00:32:52.151 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.151 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.151 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.151 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.151 fio-3.35 00:32:52.151 Starting 4 threads 00:32:54.682 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:54.941 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42102784, buflen=4096 00:32:54.941 fio: pid=770138, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:54.941 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:55.199 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:55.199 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:55.199 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=299008, buflen=4096 00:32:55.199 fio: pid=770137, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:55.457 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:55.457 16:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:55.457 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=311296, buflen=4096 00:32:55.457 fio: pid=770135, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:55.716 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=32636928, buflen=4096 00:32:55.716 fio: pid=770136, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:55.716 16:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:55.716 16:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:55.716 00:32:55.716 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770135: Mon Oct 14 16:58:00 2024 00:32:55.716 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(304KiB/3134msec) 00:32:55.716 slat (usec): min=11, max=19681, avg=461.29, stdev=2724.37 00:32:55.716 clat (usec): min=461, max=42194, avg=40481.27, stdev=4657.15 00:32:55.716 
lat (usec): min=497, max=61876, avg=40765.81, stdev=5259.22 00:32:55.716 clat percentiles (usec): 00:32:55.716 | 1.00th=[ 461], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:55.716 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:55.716 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:55.716 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:55.716 | 99.99th=[42206] 00:32:55.716 bw ( KiB/s): min= 96, max= 104, per=0.44%, avg=97.83, stdev= 3.25, samples=6 00:32:55.716 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:32:55.716 lat (usec) : 500=1.30% 00:32:55.716 lat (msec) : 50=97.40% 00:32:55.716 cpu : usr=0.00%, sys=0.13%, ctx=81, majf=0, minf=1 00:32:55.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.717 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.717 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.717 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770136: Mon Oct 14 16:58:00 2024 00:32:55.717 read: IOPS=2375, BW=9503KiB/s (9731kB/s)(31.1MiB/3354msec) 00:32:55.717 slat (usec): min=5, max=12542, avg=10.22, stdev=182.31 00:32:55.717 clat (usec): min=182, max=44485, avg=406.73, stdev=2735.11 00:32:55.717 lat (usec): min=189, max=53736, avg=415.65, stdev=2762.85 00:32:55.717 clat percentiles (usec): 00:32:55.717 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:32:55.717 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:32:55.717 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 247], 95.00th=[ 260], 00:32:55.717 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[41157], 99.95th=[42206], 00:32:55.717 | 99.99th=[44303] 00:32:55.717 bw ( KiB/s): min= 144, max=17920, per=45.68%, avg=10022.50, stdev=8482.31, samples=6 00:32:55.717 iops : min= 36, max= 4480, avg=2505.50, stdev=2120.70, samples=6 00:32:55.717 lat (usec) : 250=92.60%, 500=6.94% 00:32:55.717 lat (msec) : 50=0.45% 00:32:55.717 cpu : usr=0.30%, sys=2.86%, ctx=7975, majf=0, minf=2 00:32:55.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.717 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.717 issued rwts: total=7969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.717 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770137: Mon Oct 14 16:58:00 2024 00:32:55.717 read: IOPS=25, BW=99.4KiB/s (102kB/s)(292KiB/2939msec) 00:32:55.717 slat (usec): min=13, max=1742, avg=46.55, stdev=199.86 00:32:55.717 clat (usec): min=347, max=42304, avg=39917.06, stdev=6676.31 00:32:55.717 lat (usec): min=371, max=44047, avg=39964.01, stdev=6687.17 00:32:55.717 clat percentiles (usec): 00:32:55.717 | 1.00th=[ 347], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:55.717 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:55.717 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:55.717 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:55.717 | 99.99th=[42206] 00:32:55.717 bw ( 
KiB/s): min= 96, max= 104, per=0.45%, avg=99.20, stdev= 4.38, samples=5 00:32:55.717 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:32:55.717 lat (usec) : 500=1.35%, 750=1.35% 00:32:55.717 lat (msec) : 50=95.95% 00:32:55.717 cpu : usr=0.14%, sys=0.00%, ctx=75, majf=0, minf=2 00:32:55.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.717 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.717 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.717 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770138: Mon Oct 14 16:58:00 2024 00:32:55.717 read: IOPS=3780, BW=14.8MiB/s (15.5MB/s)(40.2MiB/2719msec) 00:32:55.717 slat (nsec): min=6328, max=32598, avg=7474.30, stdev=893.57 00:32:55.717 clat (usec): min=200, max=515, avg=253.42, stdev=25.72 00:32:55.717 lat (usec): min=207, max=522, avg=260.89, stdev=25.76 00:32:55.717 clat percentiles (usec): 00:32:55.717 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 243], 00:32:55.717 | 30.00th=[ 247], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:32:55.717 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 262], 95.00th=[ 273], 00:32:55.717 | 99.00th=[ 437], 99.50th=[ 445], 99.90th=[ 474], 99.95th=[ 486], 00:32:55.717 | 99.99th=[ 515] 00:32:55.717 bw ( KiB/s): min=15288, max=15600, per=70.39%, avg=15443.20, stdev=114.63, samples=5 00:32:55.717 iops : min= 3822, max= 3900, avg=3860.80, stdev=28.66, samples=5 00:32:55.717 lat (usec) : 250=52.79%, 500=47.17%, 750=0.03% 00:32:55.717 cpu : usr=0.81%, sys=3.75%, ctx=10281, majf=0, minf=2 00:32:55.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.717 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.717 issued rwts: total=10280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.717 00:32:55.717 Run status group 0 (all jobs): 00:32:55.717 READ: bw=21.4MiB/s (22.5MB/s), 97.0KiB/s-14.8MiB/s (99.3kB/s-15.5MB/s), io=71.9MiB (75.3MB), run=2719-3354msec 00:32:55.717 00:32:55.717 Disk stats (read/write): 00:32:55.717 nvme0n1: ios=115/0, merge=0/0, ticks=4036/0, in_queue=4036, util=98.67% 00:32:55.717 nvme0n2: ios=7603/0, merge=0/0, ticks=3015/0, in_queue=3015, util=96.16% 00:32:55.717 nvme0n3: ios=71/0, merge=0/0, ticks=2832/0, in_queue=2832, util=96.49% 00:32:55.717 nvme0n4: ios=10042/0, merge=0/0, ticks=2465/0, in_queue=2465, util=96.45% 00:32:55.717 16:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:55.717 16:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:55.975 16:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:55.975 16:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:56.233 16:58:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:56.233 16:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:56.490 16:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:56.490 16:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 769996 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:56.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:56.749 nvmf hotplug test: fio failed as expected 00:32:56.749 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:57.008 16:58:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:57.008 rmmod nvme_tcp 00:32:57.008 rmmod nvme_fabrics 00:32:57.008 rmmod nvme_keyring 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 767522 ']' 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 767522 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 767522 ']' 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 767522 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 767522 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 767522' 00:32:57.008 killing process with pid 767522 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 767522 00:32:57.008 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 767522 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 
00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.267 16:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.802 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.802 00:32:59.802 real 0m25.652s 00:32:59.802 user 1m30.106s 00:32:59.802 sys 0m11.312s 00:32:59.802 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:59.802 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:59.802 ************************************ 00:32:59.802 END TEST nvmf_fio_target 00:32:59.802 ************************************ 00:32:59.802 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:59.802 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:59.802 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:59.802 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:59.802 ************************************ 00:32:59.802 START TEST nvmf_bdevio 00:32:59.802 ************************************ 00:32:59.802 16:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:59.802 * Looking for test storage... 
00:32:59.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:59.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.802 --rc genhtml_branch_coverage=1 00:32:59.802 --rc genhtml_function_coverage=1 00:32:59.802 --rc genhtml_legend=1 00:32:59.802 --rc geninfo_all_blocks=1 00:32:59.802 --rc geninfo_unexecuted_blocks=1 00:32:59.802 00:32:59.802 ' 00:32:59.802 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:59.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.802 --rc genhtml_branch_coverage=1 00:32:59.802 --rc genhtml_function_coverage=1 00:32:59.802 --rc genhtml_legend=1 00:32:59.802 --rc geninfo_all_blocks=1 00:32:59.802 --rc geninfo_unexecuted_blocks=1 00:32:59.802 00:32:59.802 ' 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.803 --rc genhtml_branch_coverage=1 00:32:59.803 --rc genhtml_function_coverage=1 00:32:59.803 --rc genhtml_legend=1 00:32:59.803 --rc geninfo_all_blocks=1 00:32:59.803 --rc geninfo_unexecuted_blocks=1 00:32:59.803 00:32:59.803 ' 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:59.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.803 --rc genhtml_branch_coverage=1 00:32:59.803 --rc genhtml_function_coverage=1 00:32:59.803 --rc genhtml_legend=1 00:32:59.803 --rc geninfo_all_blocks=1 00:32:59.803 --rc geninfo_unexecuted_blocks=1 00:32:59.803 00:32:59.803 ' 00:32:59.803 16:58:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.803 16:58:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:59.803 16:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:06.376 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:06.377 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:06.377 16:58:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:06.377 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:06.377 Found net devices under 0000:86:00.0: cvl_0_0 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:06.377 Found net devices under 0000:86:00.1: cvl_0_1 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:06.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:33:06.377 00:33:06.377 --- 10.0.0.2 ping statistics --- 00:33:06.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.377 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:06.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:33:06.377 00:33:06.377 --- 10.0.0.1 ping statistics --- 00:33:06.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.377 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:06.377 16:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:06.377 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:06.377 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:06.377 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:06.377 16:58:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:06.377 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=774431 00:33:06.377 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 774431 00:33:06.377 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:06.377 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 774431 ']' 00:33:06.377 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:06.378 [2024-10-14 16:58:10.074187] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:06.378 [2024-10-14 16:58:10.075181] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:33:06.378 [2024-10-14 16:58:10.075219] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:06.378 [2024-10-14 16:58:10.147031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:06.378 [2024-10-14 16:58:10.190404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:06.378 [2024-10-14 16:58:10.190440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:06.378 [2024-10-14 16:58:10.190447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:06.378 [2024-10-14 16:58:10.190453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:06.378 [2024-10-14 16:58:10.190458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:06.378 [2024-10-14 16:58:10.192085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:06.378 [2024-10-14 16:58:10.192194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:06.378 [2024-10-14 16:58:10.192298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:06.378 [2024-10-14 16:58:10.192299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:06.378 [2024-10-14 16:58:10.259983] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:33:06.378 [2024-10-14 16:58:10.261266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:06.378 [2024-10-14 16:58:10.261300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:06.378 [2024-10-14 16:58:10.261933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:06.378 [2024-10-14 16:58:10.261970] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:06.378 [2024-10-14 16:58:10.329088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:06.378 Malloc0 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.378 16:58:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:06.378 [2024-10-14 16:58:10.413292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:06.378 { 00:33:06.378 "params": { 00:33:06.378 "name": "Nvme$subsystem", 00:33:06.378 "trtype": "$TEST_TRANSPORT", 00:33:06.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:06.378 "adrfam": "ipv4", 00:33:06.378 "trsvcid": "$NVMF_PORT", 00:33:06.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:06.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:06.378 "hdgst": ${hdgst:-false}, 00:33:06.378 "ddgst": ${ddgst:-false} 00:33:06.378 }, 00:33:06.378 "method": "bdev_nvme_attach_controller" 00:33:06.378 } 00:33:06.378 EOF 00:33:06.378 )") 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:33:06.378 16:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:06.378 "params": { 00:33:06.378 "name": "Nvme1", 00:33:06.378 "trtype": "tcp", 00:33:06.378 "traddr": "10.0.0.2", 00:33:06.378 "adrfam": "ipv4", 00:33:06.378 "trsvcid": "4420", 00:33:06.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:06.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:06.378 "hdgst": false, 00:33:06.378 "ddgst": false 00:33:06.378 }, 00:33:06.378 "method": "bdev_nvme_attach_controller" 00:33:06.378 }' 00:33:06.378 [2024-10-14 16:58:10.465397] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:33:06.378 [2024-10-14 16:58:10.465451] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774620 ] 00:33:06.378 [2024-10-14 16:58:10.534020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:06.378 [2024-10-14 16:58:10.577888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.378 [2024-10-14 16:58:10.577996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.378 [2024-10-14 16:58:10.577996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:06.378 I/O targets: 00:33:06.378 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:06.378 00:33:06.378 00:33:06.378 CUnit - A unit testing framework for C - Version 2.1-3 00:33:06.378 http://cunit.sourceforge.net/ 00:33:06.378 00:33:06.378 00:33:06.378 Suite: bdevio tests on: Nvme1n1 00:33:06.378 Test: blockdev write read block ...passed 00:33:06.378 Test: blockdev write zeroes read block ...passed 00:33:06.378 Test: blockdev write zeroes read no split ...passed 00:33:06.378 Test: blockdev write zeroes read split ...passed 00:33:06.637 Test: blockdev write zeroes read split partial ...passed 00:33:06.637 Test: blockdev reset ...[2024-10-14 16:58:11.040164] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:06.637 [2024-10-14 16:58:11.040229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242e400 (9): Bad file descriptor 00:33:06.637 [2024-10-14 16:58:11.043916] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:33:06.637 passed 00:33:06.637 Test: blockdev write read 8 blocks ...passed 00:33:06.637 Test: blockdev write read size > 128k ...passed 00:33:06.637 Test: blockdev write read invalid size ...passed 00:33:06.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:06.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:06.637 Test: blockdev write read max offset ...passed 00:33:06.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:06.637 Test: blockdev writev readv 8 blocks ...passed 00:33:06.637 Test: blockdev writev readv 30 x 1block ...passed 00:33:06.637 Test: blockdev writev readv block ...passed 00:33:06.637 Test: blockdev writev readv size > 128k ...passed 00:33:06.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:06.637 Test: blockdev comparev and writev ...[2024-10-14 16:58:11.254569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:06.637 [2024-10-14 16:58:11.254595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:06.637 [2024-10-14 16:58:11.254613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:06.637 [2024-10-14 16:58:11.254621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:06.637 [2024-10-14 16:58:11.254928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:06.637 [2024-10-14 16:58:11.254938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:06.637 [2024-10-14 16:58:11.254950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:06.637 [2024-10-14 16:58:11.254956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:06.638 [2024-10-14 16:58:11.255248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:06.638 [2024-10-14 16:58:11.255259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:06.638 [2024-10-14 16:58:11.255270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:06.638 [2024-10-14 16:58:11.255277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:06.638 [2024-10-14 16:58:11.255568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:06.638 [2024-10-14 16:58:11.255578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:06.638 [2024-10-14 16:58:11.255589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:06.638 [2024-10-14 16:58:11.255604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:06.897 passed 00:33:06.897 Test: blockdev nvme passthru rw ...passed 00:33:06.897 Test: blockdev nvme passthru vendor specific ...[2024-10-14 16:58:11.338016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:06.897 [2024-10-14 16:58:11.338030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:06.897 [2024-10-14 16:58:11.338140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:06.897 [2024-10-14 16:58:11.338149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:06.897 [2024-10-14 16:58:11.338264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:06.897 [2024-10-14 16:58:11.338273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:06.897 [2024-10-14 16:58:11.338381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:06.897 [2024-10-14 16:58:11.338391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:06.897 passed 00:33:06.897 Test: blockdev nvme admin passthru ...passed 00:33:06.897 Test: blockdev copy ...passed 00:33:06.897 00:33:06.897 Run Summary: Type Total Ran Passed Failed Inactive 00:33:06.897 suites 1 1 n/a 0 0 00:33:06.897 tests 23 23 23 0 0 00:33:06.897 asserts 152 152 152 0 n/a 00:33:06.897 00:33:06.897 Elapsed time = 1.010 seconds 00:33:06.897 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:06.897 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.897 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.156 rmmod nvme_tcp 00:33:07.156 rmmod nvme_fabrics 00:33:07.156 rmmod nvme_keyring 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 774431 ']' 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 774431 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 774431 ']' 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 774431 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 774431 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 774431' 00:33:07.156 killing process with pid 774431 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 774431 00:33:07.156 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 774431 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.416 16:58:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.323 16:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.323 00:33:09.323 real 0m9.979s 00:33:09.323 user 0m9.115s 
00:33:09.323 sys 0m5.230s 00:33:09.323 16:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:09.323 16:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:09.323 ************************************ 00:33:09.323 END TEST nvmf_bdevio 00:33:09.323 ************************************ 00:33:09.323 16:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:09.323 00:33:09.323 real 4m31.997s 00:33:09.323 user 9m4.542s 00:33:09.323 sys 1m52.458s 00:33:09.323 16:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:09.323 16:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:09.323 ************************************ 00:33:09.323 END TEST nvmf_target_core_interrupt_mode 00:33:09.323 ************************************ 00:33:09.583 16:58:13 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:09.583 16:58:13 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:09.583 16:58:13 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:09.583 16:58:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:09.583 ************************************ 00:33:09.583 START TEST nvmf_interrupt 00:33:09.583 ************************************ 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:09.583 * Looking for test storage... 
00:33:09.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:09.583 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:09.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.584 --rc genhtml_branch_coverage=1 00:33:09.584 --rc genhtml_function_coverage=1 00:33:09.584 --rc genhtml_legend=1 00:33:09.584 --rc geninfo_all_blocks=1 00:33:09.584 --rc geninfo_unexecuted_blocks=1 00:33:09.584 00:33:09.584 ' 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:09.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.584 --rc genhtml_branch_coverage=1 00:33:09.584 --rc genhtml_function_coverage=1 00:33:09.584 --rc genhtml_legend=1 00:33:09.584 --rc geninfo_all_blocks=1 00:33:09.584 --rc geninfo_unexecuted_blocks=1 00:33:09.584 00:33:09.584 ' 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:09.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.584 --rc genhtml_branch_coverage=1 00:33:09.584 --rc genhtml_function_coverage=1 00:33:09.584 --rc genhtml_legend=1 00:33:09.584 --rc geninfo_all_blocks=1 00:33:09.584 --rc geninfo_unexecuted_blocks=1 00:33:09.584 00:33:09.584 ' 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:09.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.584 --rc genhtml_branch_coverage=1 00:33:09.584 --rc genhtml_function_coverage=1 00:33:09.584 --rc genhtml_legend=1 00:33:09.584 --rc geninfo_all_blocks=1 00:33:09.584 --rc geninfo_unexecuted_blocks=1 00:33:09.584 00:33:09.584 ' 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.584 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.844 16:58:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:16.418 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:16.418 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.419 16:58:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:16.419 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:16.419 Found net devices under 0000:86:00.0: cvl_0_0 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:16.419 Found net devices under 0000:86:00.1: cvl_0_1 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:16.419 16:58:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.419 16:58:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:16.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:33:16.419 00:33:16.419 --- 10.0.0.2 ping statistics --- 00:33:16.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.419 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:16.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:33:16.419 00:33:16.419 --- 10.0.0.1 ping statistics --- 00:33:16.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.419 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=778175 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 778175 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 778175 ']' 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:16.419 [2024-10-14 16:58:20.226288] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:16.419 [2024-10-14 16:58:20.227206] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:33:16.419 [2024-10-14 16:58:20.227240] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.419 [2024-10-14 16:58:20.287220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:16.419 [2024-10-14 16:58:20.330248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:16.419 [2024-10-14 16:58:20.330283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.419 [2024-10-14 16:58:20.330291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.419 [2024-10-14 16:58:20.330298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.419 [2024-10-14 16:58:20.330303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.419 [2024-10-14 16:58:20.331480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.419 [2024-10-14 16:58:20.331483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.419 [2024-10-14 16:58:20.397832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:16.419 [2024-10-14 16:58:20.398485] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:16.419 [2024-10-14 16:58:20.398666] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:16.419 5000+0 records in 00:33:16.419 5000+0 records out 00:33:16.419 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0169752 s, 603 MB/s 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:16.419 AIO0 00:33:16.419 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:16.420 [2024-10-14 16:58:20.536295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.420 16:58:20 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:16.420 [2024-10-14 16:58:20.576541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 778175 0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 778175 0 idle 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=778175 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 778175 -w 256 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 778175 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0' 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 778175 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 778175 1 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 778175 1 idle 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=778175 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 778175 -w 256 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 778195 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 778195 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=778435 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 778175 0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 778175 0 busy 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=778175 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 778175 -w 256 00:33:16.420 16:58:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:16.679 16:58:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 778175 root 20 0 128.2g 46848 33792 R 26.7 0.0 0:00.27 reactor_0' 00:33:16.679 16:58:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 778175 root 20 0 128.2g 46848 33792 R 26.7 0.0 0:00.27 reactor_0 00:33:16.679 16:58:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:16.680 16:58:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:16.680 16:58:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=26.7 00:33:16.680 16:58:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=26 00:33:16.680 16:58:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:16.680 16:58:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:16.680 16:58:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:33:17.617 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:33:17.617 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:17.617 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 778175 -w 256 00:33:17.617 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 778175 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.63 reactor_0' 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 778175 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.63 reactor_0 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 778175 1 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 778175 1 busy 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=778175 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 778175 -w 256 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 778195 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.38 reactor_1' 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 778195 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.38 reactor_1 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:17.876 16:58:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 778435 00:33:27.859 Initializing NVMe Controllers 00:33:27.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:27.859 Controller IO queue size 256, less than required. 00:33:27.859 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:27.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:27.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:27.859 Initialization complete. Launching workers. 
00:33:27.859 ======================================================== 00:33:27.859 Latency(us) 00:33:27.859 Device Information : IOPS MiB/s Average min max 00:33:27.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15951.07 62.31 16057.20 4153.13 29169.88 00:33:27.859 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16418.37 64.13 15596.75 7316.13 26190.95 00:33:27.859 ======================================================== 00:33:27.859 Total : 32369.44 126.44 15823.65 4153.13 29169.88 00:33:27.859 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 778175 0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 778175 0 idle 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=778175 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 778175 -w 256 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 778175 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0' 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 778175 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 778175 1 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 778175 1 idle 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=778175 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 778175 -w 256 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 778195 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 778195 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:27.859 16:58:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 778175 0 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 778175 0 idle 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=778175 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 778175 -w 256 00:33:29.766 16:58:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 778175 root 20 0 128.2g 72192 33792 R 0.0 0.0 0:20.48 reactor_0' 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 778175 root 20 0 128.2g 72192 33792 R 0.0 0.0 0:20.48 reactor_0 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 778175 1 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 778175 1 idle 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=778175 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:29.766 16:58:34 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 778175 -w 256 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 778195 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.09 reactor_1' 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 778195 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.09 reactor_1 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:29.766 16:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:30.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:30.025 16:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:30.025 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:33:30.025 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:30.025 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:30.025 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:30.025 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:30.025 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:30.026 rmmod nvme_tcp 00:33:30.026 rmmod nvme_fabrics 00:33:30.026 rmmod nvme_keyring 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 778175 ']' 00:33:30.026 
16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 778175 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 778175 ']' 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 778175 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 778175 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 778175' 00:33:30.026 killing process with pid 778175 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 778175 00:33:30.026 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 778175 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:30.285 16:58:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.822 16:58:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:32.822 00:33:32.822 real 0m22.819s 00:33:32.822 user 0m39.543s 00:33:32.822 sys 0m8.595s 00:33:32.822 16:58:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:32.822 16:58:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:32.822 ************************************ 00:33:32.822 END TEST nvmf_interrupt 00:33:32.822 ************************************ 00:33:32.822 00:33:32.822 real 26m59.839s 00:33:32.822 user 55m40.526s 00:33:32.822 sys 9m9.315s 00:33:32.822 16:58:36 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:32.822 16:58:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.822 ************************************ 00:33:32.822 END TEST nvmf_tcp 00:33:32.822 ************************************ 00:33:32.822 16:58:36 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:33:32.822 16:58:36 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:32.822 16:58:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
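The reactor_is_busy/reactor_is_idle checks traced above reduce to sampling one thread of the target with top and comparing its CPU column against a threshold. Below is a minimal standalone sketch of that probe, grounded in the interrupt/common.sh trace; the 30% busy/idle thresholds are the illustrative values this run happens to use, and the helper names are ours, not SPDK's.

# Minimal sketch of the busy/idle probe seen in interrupt/common.sh above.
# Assumes procps "top" with per-thread batch output (-bH) and reactor threads named reactor_<idx>.
reactor_cpu_rate() {
    local pid=$1 idx=$2
    # Sample one batch iteration of the target's threads and keep the reactor of interest.
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g' | awk '{print $9}'
}

reactor_is_busy() {
    local pid=$1 idx=$2 busy_threshold=${3:-30}
    local rate
    rate=$(reactor_cpu_rate "$pid" "$idx")
    rate=${rate%.*}                      # drop the fractional part, as the trace does
    (( ${rate:-0} >= busy_threshold ))
}

# Example: retry up to 10 times, one second apart, as the traced loop does.
# for j in {1..10}; do reactor_is_busy 778175 0 30 && break; sleep 1; done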
00:33:32.822 16:58:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:32.822 16:58:36 -- common/autotest_common.sh@10 -- # set +x 00:33:32.822 ************************************ 00:33:32.822 START TEST spdkcli_nvmf_tcp 00:33:32.822 ************************************ 00:33:32.822 16:58:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:32.823 * Looking for test storage... 00:33:32.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:32.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.823 --rc genhtml_branch_coverage=1 00:33:32.823 --rc genhtml_function_coverage=1 00:33:32.823 --rc genhtml_legend=1 00:33:32.823 --rc geninfo_all_blocks=1 00:33:32.823 --rc geninfo_unexecuted_blocks=1 00:33:32.823 00:33:32.823 ' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:32.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.823 --rc genhtml_branch_coverage=1 00:33:32.823 --rc genhtml_function_coverage=1 00:33:32.823 --rc genhtml_legend=1 00:33:32.823 --rc geninfo_all_blocks=1 00:33:32.823 --rc geninfo_unexecuted_blocks=1 00:33:32.823 00:33:32.823 ' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:32.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.823 --rc genhtml_branch_coverage=1 00:33:32.823 --rc genhtml_function_coverage=1 00:33:32.823 --rc genhtml_legend=1 00:33:32.823 --rc geninfo_all_blocks=1 00:33:32.823 --rc geninfo_unexecuted_blocks=1 00:33:32.823 00:33:32.823 ' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:32.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.823 --rc genhtml_branch_coverage=1 00:33:32.823 --rc genhtml_function_coverage=1 00:33:32.823 --rc genhtml_legend=1 00:33:32.823 --rc geninfo_all_blocks=1 00:33:32.823 --rc geninfo_unexecuted_blocks=1 00:33:32.823 00:33:32.823 ' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:32.823 
16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:32.823 16:58:37 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:32.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=781131 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 781131 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 781131 ']' 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.823 [2024-10-14 16:58:37.195189] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
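The spdkcli test above starts a fresh nvmf_tgt with -m 0x3 -p 0 and blocks in waitforlisten until the application's RPC socket answers. A rough equivalent of that start-and-wait step is sketched below; the socket path, the in-tree binary location and the rpc_get_methods probe are assumptions matching what this trace shows, not requirements of the test.

# Sketch: launch the target and poll its RPC socket until it is ready.
# Paths assume the in-tree layout used by this run; adjust for an installed SPDK.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x3 -p 0 &
tgt_pid=$!

# waitforlisten equivalent: keep probing a harmless RPC until the socket accepts it.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.2
done

echo "nvmf_tgt ready, pid $tgt_pid"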
00:33:32.823 [2024-10-14 16:58:37.195235] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781131 ] 00:33:32.823 [2024-10-14 16:58:37.255157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:32.823 [2024-10-14 16:58:37.311261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.823 [2024-10-14 16:58:37.311266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.823 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:32.824 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:33:32.824 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:32.824 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:32.824 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.824 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:32.824 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:32.824 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:32.824 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:32.824 16:58:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:33.083 16:58:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:33.083 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:33.083 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:33.083 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:33.083 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:33.083 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:33.083 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:33.083 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:33.083 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:33.083 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:33.083 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:33.083 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:33.083 ' 00:33:35.618 [2024-10-14 16:58:40.144482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.020 [2024-10-14 16:58:41.480921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:39.553 [2024-10-14 16:58:43.964510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:41.552 [2024-10-14 16:58:46.107152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:43.462 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:43.462 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:43.462 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:43.462 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:43.462 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:43.462 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:43.462 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:43.462 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:43.462 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:43.462 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:43.462 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:43.462 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:43.462 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:43.462 16:58:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:43.462 16:58:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:43.462 16:58:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:43.462 16:58:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:43.462 16:58:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:43.462 16:58:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:43.462 16:58:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:43.462 16:58:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:43.720 16:58:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:43.720 16:58:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:43.720 16:58:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:43.720 16:58:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:43.720 16:58:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:43.979 
16:58:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:43.979 16:58:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:43.979 16:58:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:43.979 16:58:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:43.979 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:43.979 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:43.979 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:43.979 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:43.979 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:43.979 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:43.979 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:43.979 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:43.979 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:43.979 ' 00:33:49.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:49.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:49.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:49.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:49.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:49.249 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:49.249 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:49.249 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:49.249 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:49.249 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:49.249 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:49.249 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:49.249 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:49.249 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:49.508 16:58:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:49.508 16:58:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:49.508 16:58:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.508 
16:58:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 781131 00:33:49.508 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 781131 ']' 00:33:49.508 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 781131 00:33:49.508 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:33:49.508 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:49.508 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 781131 00:33:49.508 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:49.508 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:49.508 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 781131' 00:33:49.508 killing process with pid 781131 00:33:49.508 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 781131 00:33:49.508 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 781131 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 781131 ']' 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 781131 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 781131 ']' 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 781131 00:33:49.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (781131) - No such process 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 781131 is not found' 00:33:49.768 Process with pid 781131 is not found 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:49.768 00:33:49.768 real 0m17.287s 00:33:49.768 user 0m38.131s 00:33:49.768 sys 0m0.819s 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:49.768 16:58:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.768 ************************************ 00:33:49.768 END TEST spdkcli_nvmf_tcp 00:33:49.768 ************************************ 00:33:49.768 16:58:54 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:49.768 16:58:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:49.768 16:58:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:49.768 16:58:54 -- common/autotest_common.sh@10 -- # set +x 00:33:49.768 ************************************ 00:33:49.768 START TEST nvmf_identify_passthru 00:33:49.768 ************************************ 00:33:49.768 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:49.768 * Looking for test storage... 
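killprocess, used twice in this section to stop the targets, first checks that the pid still exists and logs which process is about to be signalled, then kills it and waits; the "No such process" line above is the expected outcome when the target has already exited by the time cleanup runs. A condensed sketch of that pattern, using the same kill/ps primitives seen in the trace, follows; the real helper also special-cases targets launched under sudo, a branch omitted here.

# Condensed sketch of the killprocess flow from autotest_common.sh as exercised above.
killprocess_sketch() {
    local pid=$1

    # kill -0 only checks that the pid exists and is signalable.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi

    # Log which process is about to be stopped, as the trace does.
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"

    kill "$pid"
    wait "$pid" 2>/dev/null || true   # wait only succeeds if $pid is a child of this shell
}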
00:33:49.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:49.768 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:49.768 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:33:49.768 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:50.028 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:50.028 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:50.028 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.028 --rc genhtml_branch_coverage=1 00:33:50.028 --rc genhtml_function_coverage=1 00:33:50.028 --rc genhtml_legend=1 00:33:50.028 --rc geninfo_all_blocks=1 00:33:50.028 --rc geninfo_unexecuted_blocks=1 00:33:50.028 00:33:50.028 ' 00:33:50.028 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.028 --rc genhtml_branch_coverage=1 00:33:50.028 --rc genhtml_function_coverage=1 00:33:50.028 --rc genhtml_legend=1 00:33:50.028 --rc geninfo_all_blocks=1 00:33:50.028 --rc geninfo_unexecuted_blocks=1 00:33:50.028 00:33:50.028 ' 00:33:50.028 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.028 --rc genhtml_branch_coverage=1 00:33:50.028 --rc genhtml_function_coverage=1 00:33:50.028 --rc genhtml_legend=1 00:33:50.028 --rc geninfo_all_blocks=1 00:33:50.028 --rc geninfo_unexecuted_blocks=1 00:33:50.028 00:33:50.028 ' 00:33:50.028 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:50.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:50.028 --rc genhtml_branch_coverage=1 00:33:50.028 --rc genhtml_function_coverage=1 00:33:50.028 --rc genhtml_legend=1 00:33:50.028 --rc geninfo_all_blocks=1 00:33:50.028 --rc geninfo_unexecuted_blocks=1 00:33:50.028 00:33:50.028 ' 00:33:50.028 16:58:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.028 16:58:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.028 16:58:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.028 16:58:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.028 16:58:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.028 16:58:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:50.028 16:58:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:50.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:50.028 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:50.029 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:50.029 16:58:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:50.029 16:58:54 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:50.029 16:58:54 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:50.029 16:58:54 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:50.029 16:58:54 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:50.029 16:58:54 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.029 16:58:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.029 16:58:54 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.029 16:58:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:50.029 16:58:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:50.029 16:58:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:50.029 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:50.029 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:50.029 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:50.029 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:50.029 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:50.029 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.029 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:50.029 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.029 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:50.029 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:50.029 16:58:54 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:50.029 16:58:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:56.600 16:59:00 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:56.600 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:56.600 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:56.600 Found net devices under 0000:86:00.0: cvl_0_0 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:56.600 Found net devices under 0000:86:00.1: cvl_0_1 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:56.600 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:56.601 16:59:00 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:56.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:56.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:33:56.601 00:33:56.601 --- 10.0.0.2 ping statistics --- 00:33:56.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.601 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:56.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:56.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:33:56.601 00:33:56.601 --- 10.0.0.1 ping statistics --- 00:33:56.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.601 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:56.601 16:59:00 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:56.601 16:59:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:56.601 16:59:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:33:56.601 16:59:00 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:33:56.601 16:59:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:56.601 16:59:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:56.601 16:59:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:56.601 16:59:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:56.601 16:59:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:00.789 16:59:05 nvmf_identify_passthru -- target/identify_passthru.sh@23 
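The nvmf_tcp_init sequence traced above builds the loopback topology the rest of the run depends on: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace to act as the target, while the other port (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of those steps, using the interface names and 10.0.0.0/24 addresses from this run (they will differ on other hosts):

  # target side lives in its own namespace, on one physical port
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator keeps the other port in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring the links up and open TCP/4420 towards the initiator port
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # verify reachability in both directions, then load the kernel NVMe/TCP initiator
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp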
-- # nvme_serial_number=PHLN951000C61P6AGN 00:34:00.789 16:59:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:00.789 16:59:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:00.789 16:59:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:06.059 16:59:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:06.060 16:59:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:06.060 16:59:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:06.060 16:59:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.060 16:59:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:06.060 16:59:09 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:06.060 16:59:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.060 16:59:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=788570 00:34:06.060 16:59:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:06.060 16:59:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:06.060 16:59:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 788570 00:34:06.060 16:59:09 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 788570 ']' 00:34:06.060 16:59:09 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.060 16:59:09 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:06.060 16:59:09 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:06.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:06.060 16:59:09 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:06.060 16:59:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.060 [2024-10-14 16:59:09.988169] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:34:06.060 [2024-10-14 16:59:09.988216] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:06.060 [2024-10-14 16:59:10.061968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:06.060 [2024-10-14 16:59:10.106518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:06.060 [2024-10-14 16:59:10.106553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:06.060 [2024-10-14 16:59:10.106560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:06.060 [2024-10-14 16:59:10.106566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:06.060 [2024-10-14 16:59:10.106571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:06.060 [2024-10-14 16:59:10.108068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:06.060 [2024-10-14 16:59:10.108175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:06.060 [2024-10-14 16:59:10.108198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:06.060 [2024-10-14 16:59:10.108200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:34:06.060 16:59:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.060 INFO: Log level set to 20 00:34:06.060 INFO: Requests: 00:34:06.060 { 00:34:06.060 "jsonrpc": "2.0", 00:34:06.060 "method": "nvmf_set_config", 00:34:06.060 "id": 1, 00:34:06.060 "params": { 00:34:06.060 "admin_cmd_passthru": { 00:34:06.060 "identify_ctrlr": true 00:34:06.060 } 00:34:06.060 } 00:34:06.060 } 00:34:06.060 00:34:06.060 INFO: response: 00:34:06.060 { 00:34:06.060 "jsonrpc": "2.0", 00:34:06.060 "id": 1, 00:34:06.060 "result": true 00:34:06.060 } 00:34:06.060 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.060 16:59:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.060 INFO: Setting log level to 20 00:34:06.060 INFO: Setting log level to 20 00:34:06.060 INFO: Log level set to 20 00:34:06.060 INFO: Log level set to 20 00:34:06.060 INFO: Requests: 00:34:06.060 { 00:34:06.060 "jsonrpc": "2.0", 00:34:06.060 "method": "framework_start_init", 00:34:06.060 "id": 1 00:34:06.060 } 00:34:06.060 00:34:06.060 INFO: Requests: 00:34:06.060 { 00:34:06.060 "jsonrpc": "2.0", 00:34:06.060 "method": "framework_start_init", 00:34:06.060 "id": 1 00:34:06.060 } 00:34:06.060 00:34:06.060 [2024-10-14 16:59:10.223435] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:06.060 INFO: response: 00:34:06.060 { 00:34:06.060 "jsonrpc": "2.0", 00:34:06.060 "id": 1, 00:34:06.060 "result": true 00:34:06.060 } 00:34:06.060 00:34:06.060 INFO: response: 00:34:06.060 { 00:34:06.060 "jsonrpc": "2.0", 00:34:06.060 "id": 1, 00:34:06.060 "result": true 00:34:06.060 } 00:34:06.060 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.060 16:59:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.060 16:59:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:06.060 INFO: Setting log level to 40 00:34:06.060 INFO: Setting log level to 40 00:34:06.060 INFO: Setting log level to 40 00:34:06.060 [2024-10-14 16:59:10.236779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.060 16:59:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:06.060 16:59:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.060 16:59:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.591 Nvme0n1 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.591 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.591 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.591 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.591 [2024-10-14 16:59:13.148700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.591 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.591 [ 00:34:08.591 { 00:34:08.591 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:08.591 "subtype": "Discovery", 00:34:08.591 "listen_addresses": [], 00:34:08.591 "allow_any_host": true, 00:34:08.591 "hosts": [] 00:34:08.591 }, 00:34:08.591 { 00:34:08.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.591 "subtype": "NVMe", 00:34:08.591 "listen_addresses": [ 00:34:08.591 { 00:34:08.591 "trtype": "TCP", 00:34:08.591 "adrfam": "IPv4", 00:34:08.591 "traddr": "10.0.0.2", 00:34:08.591 "trsvcid": "4420" 00:34:08.591 } 00:34:08.591 ], 00:34:08.591 "allow_any_host": true, 00:34:08.591 "hosts": [], 00:34:08.591 "serial_number": 
"SPDK00000000000001", 00:34:08.591 "model_number": "SPDK bdev Controller", 00:34:08.591 "max_namespaces": 1, 00:34:08.591 "min_cntlid": 1, 00:34:08.591 "max_cntlid": 65519, 00:34:08.591 "namespaces": [ 00:34:08.591 { 00:34:08.591 "nsid": 1, 00:34:08.591 "bdev_name": "Nvme0n1", 00:34:08.591 "name": "Nvme0n1", 00:34:08.591 "nguid": "C7470031E0D84466B3FA448A9BD1AF53", 00:34:08.591 "uuid": "c7470031-e0d8-4466-b3fa-448a9bd1af53" 00:34:08.591 } 00:34:08.591 ] 00:34:08.591 } 00:34:08.591 ] 00:34:08.591 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.591 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:08.591 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:08.591 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:08.851 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:34:08.851 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:08.851 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:08.851 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:09.109 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:09.109 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:34:09.110 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:09.110 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:09.110 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.110 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.110 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.110 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:09.110 16:59:13 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:09.110 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:09.110 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:09.110 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.110 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:09.110 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.110 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.110 rmmod nvme_tcp 00:34:09.110 rmmod nvme_fabrics 00:34:09.369 rmmod nvme_keyring 00:34:09.369 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.369 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:09.369 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:09.369 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 788570 ']' 00:34:09.369 16:59:13 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 788570 00:34:09.369 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 788570 ']' 00:34:09.369 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 788570 00:34:09.369 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:34:09.369 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:09.369 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 788570 00:34:09.369 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:09.369 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:09.369 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 788570' 00:34:09.369 killing process with pid 788570 00:34:09.369 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 788570 00:34:09.369 16:59:13 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 788570 00:34:11.272 16:59:15 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:11.272 16:59:15 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:11.272 16:59:15 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:11.272 16:59:15 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:11.272 16:59:15 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:34:11.272 16:59:15 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:11.272 16:59:15 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:34:11.272 16:59:15 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:11.272 16:59:15 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:11.272 16:59:15 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.272 16:59:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:11.272 16:59:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.808 16:59:17 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:13.808 00:34:13.808 real 0m23.634s 00:34:13.808 user 0m30.577s 00:34:13.808 sys 0m6.303s 00:34:13.808 16:59:17 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:13.808 16:59:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:13.808 ************************************ 00:34:13.808 END TEST nvmf_identify_passthru 00:34:13.808 ************************************ 00:34:13.808 16:59:17 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:13.808 16:59:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:13.808 16:59:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:13.808 16:59:17 -- common/autotest_common.sh@10 -- # set +x 00:34:13.808 ************************************ 00:34:13.808 START TEST nvmf_dif 00:34:13.808 ************************************ 00:34:13.808 16:59:17 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:13.808 * Looking for test storage... 
00:34:13.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:13.808 16:59:18 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:13.808 16:59:18 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:34:13.808 16:59:18 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:13.808 16:59:18 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:13.808 16:59:18 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:13.808 16:59:18 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:13.808 16:59:18 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:13.808 16:59:18 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:13.808 16:59:18 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:13.808 16:59:18 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:13.809 16:59:18 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:13.809 16:59:18 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:13.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.809 --rc genhtml_branch_coverage=1 00:34:13.809 --rc genhtml_function_coverage=1 00:34:13.809 --rc genhtml_legend=1 00:34:13.809 --rc geninfo_all_blocks=1 00:34:13.809 --rc geninfo_unexecuted_blocks=1 00:34:13.809 00:34:13.809 ' 00:34:13.809 16:59:18 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:13.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.809 --rc genhtml_branch_coverage=1 00:34:13.809 --rc genhtml_function_coverage=1 00:34:13.809 --rc genhtml_legend=1 00:34:13.809 --rc geninfo_all_blocks=1 00:34:13.809 --rc geninfo_unexecuted_blocks=1 00:34:13.809 00:34:13.809 ' 00:34:13.809 16:59:18 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:34:13.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.809 --rc genhtml_branch_coverage=1 00:34:13.809 --rc genhtml_function_coverage=1 00:34:13.809 --rc genhtml_legend=1 00:34:13.809 --rc geninfo_all_blocks=1 00:34:13.809 --rc geninfo_unexecuted_blocks=1 00:34:13.809 00:34:13.809 ' 00:34:13.809 16:59:18 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:13.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.809 --rc genhtml_branch_coverage=1 00:34:13.809 --rc genhtml_function_coverage=1 00:34:13.809 --rc genhtml_legend=1 00:34:13.809 --rc geninfo_all_blocks=1 00:34:13.809 --rc geninfo_unexecuted_blocks=1 00:34:13.809 00:34:13.809 ' 00:34:13.809 16:59:18 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.809 16:59:18 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.809 16:59:18 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.809 16:59:18 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.809 16:59:18 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.809 16:59:18 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:13.809 16:59:18 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:13.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:13.809 16:59:18 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:13.809 16:59:18 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:13.809 16:59:18 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:13.809 16:59:18 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:13.809 16:59:18 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.809 16:59:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:13.809 16:59:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:13.809 16:59:18 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:34:13.809 16:59:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:20.425 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.425 
16:59:23 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:20.425 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:20.425 Found net devices under 0000:86:00.0: cvl_0_0 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:20.425 Found net devices under 0000:86:00.1: cvl_0_1 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:20.425 16:59:23 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.426 16:59:23 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:20.426 16:59:23 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.426 16:59:24 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:20.426 16:59:24 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:20.426 16:59:24 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:20.426 16:59:24 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:20.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:20.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:34:20.426 00:34:20.426 --- 10.0.0.2 ping statistics --- 00:34:20.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.426 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:34:20.426 16:59:24 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:20.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:20.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:34:20.426 00:34:20.426 --- 10.0.0.1 ping statistics --- 00:34:20.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:20.426 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:34:20.426 16:59:24 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:20.426 16:59:24 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:34:20.426 16:59:24 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:34:20.426 16:59:24 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:22.332 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:22.332 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:22.332 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:22.332 16:59:26 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.332 16:59:26 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:22.332 16:59:26 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:22.332 16:59:26 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.332 16:59:26 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:22.332 16:59:26 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:22.591 16:59:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:22.591 16:59:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:22.591 16:59:26 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:22.591 16:59:26 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:22.591 16:59:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.591 16:59:26 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=794110 00:34:22.591 16:59:26 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 794110 00:34:22.591 16:59:26 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:22.591 16:59:26 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 794110 ']' 00:34:22.591 16:59:26 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.591 16:59:26 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:22.591 16:59:26 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:22.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.591 16:59:26 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:22.591 16:59:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.591 [2024-10-14 16:59:27.030092] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:34:22.591 [2024-10-14 16:59:27.030134] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.591 [2024-10-14 16:59:27.102964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.591 [2024-10-14 16:59:27.143705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.591 [2024-10-14 16:59:27.143741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.591 [2024-10-14 16:59:27.143749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.591 [2024-10-14 16:59:27.143755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.591 [2024-10-14 16:59:27.143759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.591 [2024-10-14 16:59:27.144307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.850 16:59:27 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:22.851 16:59:27 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:34:22.851 16:59:27 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:22.851 16:59:27 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:22.851 16:59:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.851 16:59:27 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.851 16:59:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:22.851 16:59:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:22.851 16:59:27 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.851 16:59:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.851 [2024-10-14 16:59:27.278498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.851 16:59:27 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.851 16:59:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:22.851 16:59:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:22.851 16:59:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:22.851 16:59:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.851 ************************************ 00:34:22.851 START TEST fio_dif_1_default 00:34:22.851 ************************************ 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.851 bdev_null0 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.851 [2024-10-14 16:59:27.346807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:22.851 { 00:34:22.851 "params": { 00:34:22.851 "name": "Nvme$subsystem", 00:34:22.851 "trtype": "$TEST_TRANSPORT", 00:34:22.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.851 "adrfam": "ipv4", 00:34:22.851 "trsvcid": "$NVMF_PORT", 00:34:22.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.851 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:34:22.851 "hdgst": ${hdgst:-false}, 00:34:22.851 "ddgst": ${ddgst:-false} 00:34:22.851 }, 00:34:22.851 "method": "bdev_nvme_attach_controller" 00:34:22.851 } 00:34:22.851 EOF 00:34:22.851 )") 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:22.851 "params": { 00:34:22.851 "name": "Nvme0", 00:34:22.851 "trtype": "tcp", 00:34:22.851 "traddr": "10.0.0.2", 00:34:22.851 "adrfam": "ipv4", 00:34:22.851 "trsvcid": "4420", 00:34:22.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:22.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:22.851 "hdgst": false, 00:34:22.851 "ddgst": false 00:34:22.851 }, 00:34:22.851 "method": "bdev_nvme_attach_controller" 00:34:22.851 }' 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:22.851 16:59:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.109 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:23.109 fio-3.35 00:34:23.109 Starting 1 thread 00:34:35.317 00:34:35.317 filename0: (groupid=0, jobs=1): err= 0: pid=794481: Mon Oct 14 16:59:38 2024 00:34:35.317 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:34:35.317 slat (nsec): min=5710, max=26128, avg=6234.29, stdev=1570.37 00:34:35.317 clat (usec): min=40774, max=45269, avg=41008.48, stdev=291.42 00:34:35.317 lat (usec): min=40780, max=45296, avg=41014.71, stdev=291.99 00:34:35.317 clat percentiles (usec): 00:34:35.317 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:35.317 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:35.317 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:35.317 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:34:35.317 | 99.99th=[45351] 00:34:35.317 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:34:35.317 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:35.317 lat (msec) : 50=100.00% 00:34:35.317 cpu : usr=92.25%, sys=7.50%, ctx=10, majf=0, minf=0 00:34:35.317 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.317 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.317 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:35.317 00:34:35.317 Run status group 0 (all jobs): 
00:34:35.317 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:34:35.317 16:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:35.317 16:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:35.317 16:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:35.317 16:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:35.317 16:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:35.317 16:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 00:34:35.318 real 0m11.177s 00:34:35.318 user 0m16.004s 00:34:35.318 sys 0m1.049s 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 ************************************ 00:34:35.318 END TEST fio_dif_1_default 00:34:35.318 ************************************ 00:34:35.318 16:59:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:35.318 16:59:38 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:35.318 16:59:38 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 ************************************ 00:34:35.318 START TEST fio_dif_1_multi_subsystems 00:34:35.318 ************************************ 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 bdev_null0 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 [2024-10-14 16:59:38.593074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 bdev_null1 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:35.318 { 00:34:35.318 "params": { 00:34:35.318 "name": "Nvme$subsystem", 00:34:35.318 "trtype": "$TEST_TRANSPORT", 00:34:35.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:35.318 "adrfam": "ipv4", 00:34:35.318 "trsvcid": "$NVMF_PORT", 00:34:35.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:35.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:35.318 "hdgst": ${hdgst:-false}, 00:34:35.318 "ddgst": ${ddgst:-false} 00:34:35.318 }, 00:34:35.318 "method": "bdev_nvme_attach_controller" 00:34:35.318 } 00:34:35.318 EOF 00:34:35.318 )") 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:35.318 { 00:34:35.318 "params": { 00:34:35.318 "name": "Nvme$subsystem", 00:34:35.318 "trtype": "$TEST_TRANSPORT", 00:34:35.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:35.318 "adrfam": "ipv4", 00:34:35.318 "trsvcid": "$NVMF_PORT", 00:34:35.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:35.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:35.318 "hdgst": ${hdgst:-false}, 00:34:35.318 "ddgst": ${ddgst:-false} 00:34:35.318 }, 00:34:35.318 "method": "bdev_nvme_attach_controller" 00:34:35.318 } 00:34:35.318 EOF 00:34:35.318 )") 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:34:35.318 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:35.318 "params": { 00:34:35.318 "name": "Nvme0", 00:34:35.318 "trtype": "tcp", 00:34:35.318 "traddr": "10.0.0.2", 00:34:35.318 "adrfam": "ipv4", 00:34:35.318 "trsvcid": "4420", 00:34:35.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:35.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:35.318 "hdgst": false, 00:34:35.318 "ddgst": false 00:34:35.318 }, 00:34:35.318 "method": "bdev_nvme_attach_controller" 00:34:35.318 },{ 00:34:35.318 "params": { 00:34:35.318 "name": "Nvme1", 00:34:35.318 "trtype": "tcp", 00:34:35.318 "traddr": "10.0.0.2", 00:34:35.318 "adrfam": "ipv4", 00:34:35.318 "trsvcid": "4420", 00:34:35.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:35.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:35.318 "hdgst": false, 00:34:35.318 "ddgst": false 00:34:35.318 }, 00:34:35.319 "method": "bdev_nvme_attach_controller" 00:34:35.319 }' 00:34:35.319 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:35.319 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:35.319 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:35.319 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:35.319 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:35.319 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:35.319 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 
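At this point the target exports two identical null-bdev namespaces behind separate NQNs (cnode0 and cnode1), both listening on 10.0.0.2 port 4420. A quick way to confirm that state before fio starts, not part of the test script but a hedged sketch using SPDK's standard RPC client and jq, is:

    # list the configured subsystems and their listeners
    scripts/rpc.py nvmf_get_subsystems | jq '.[] | {nqn, listen_addresses}'
    # expected to include nqn.2016-06.io.spdk:cnode0 and nqn.2016-06.io.spdk:cnode1,
    # each with a TCP listener on 10.0.0.2 port 4420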
00:34:35.319 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:35.319 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:35.319 16:59:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:35.319 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:35.319 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:35.319 fio-3.35 00:34:35.319 Starting 2 threads 00:34:45.296 00:34:45.296 filename0: (groupid=0, jobs=1): err= 0: pid=796450: Mon Oct 14 16:59:49 2024 00:34:45.296 read: IOPS=201, BW=806KiB/s (825kB/s)(8080KiB/10028msec) 00:34:45.296 slat (nsec): min=5829, max=31354, avg=6981.88, stdev=2201.24 00:34:45.296 clat (usec): min=389, max=42545, avg=19836.81, stdev=20484.40 00:34:45.296 lat (usec): min=395, max=42552, avg=19843.80, stdev=20483.80 00:34:45.296 clat percentiles (usec): 00:34:45.296 | 1.00th=[ 408], 5.00th=[ 449], 10.00th=[ 469], 20.00th=[ 482], 00:34:45.296 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 611], 60.00th=[41157], 00:34:45.296 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:34:45.296 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:45.296 | 99.99th=[42730] 00:34:45.296 bw ( KiB/s): min= 768, max= 896, per=67.44%, avg=806.40, stdev=48.25, samples=20 00:34:45.296 iops : min= 192, max= 224, avg=201.60, stdev=12.06, samples=20 00:34:45.296 lat (usec) : 500=36.88%, 750=15.59%, 1000=0.20% 00:34:45.296 lat (msec) : 2=0.20%, 50=47.13% 00:34:45.296 cpu : usr=96.63%, sys=3.12%, ctx=14, majf=0, minf=107 00:34:45.296 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.296 issued rwts: total=2020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.296 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:45.296 filename1: (groupid=0, jobs=1): err= 0: pid=796451: Mon Oct 14 16:59:49 2024 00:34:45.296 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 00:34:45.296 slat (nsec): min=5857, max=29101, avg=7598.20, stdev=2558.47 00:34:45.296 clat (usec): min=40826, max=42007, avg=41021.56, stdev=205.78 00:34:45.296 lat (usec): min=40832, max=42018, avg=41029.16, stdev=206.04 00:34:45.296 clat percentiles (usec): 00:34:45.296 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:45.296 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:45.296 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:45.296 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:45.296 | 99.99th=[42206] 00:34:45.296 bw ( KiB/s): min= 384, max= 416, per=32.47%, avg=388.80, stdev=11.72, samples=20 00:34:45.296 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:45.296 lat (msec) : 50=100.00% 00:34:45.296 cpu : usr=96.37%, sys=3.40%, ctx=10, majf=0, minf=39 00:34:45.296 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:45.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.296 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:45.296 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:45.296 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:45.296 00:34:45.296 Run status group 0 (all jobs): 00:34:45.296 READ: bw=1195KiB/s (1224kB/s), 390KiB/s-806KiB/s (399kB/s-825kB/s), io=11.7MiB (12.3MB), run=10015-10028msec 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.556 00:34:45.556 real 0m11.419s 00:34:45.556 user 0m26.702s 00:34:45.556 sys 0m0.961s 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:45.556 16:59:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:45.556 ************************************ 00:34:45.556 END TEST fio_dif_1_multi_subsystems 00:34:45.556 ************************************ 00:34:45.556 16:59:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:34:45.556 16:59:50 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:45.556 16:59:50 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:45.556 16:59:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.556 ************************************ 00:34:45.556 START TEST fio_dif_rand_params 00:34:45.556 ************************************ 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:45.556 bdev_null0 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:45.556 [2024-10-14 16:59:50.088521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.556 16:59:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:45.556 { 00:34:45.556 "params": { 00:34:45.556 "name": "Nvme$subsystem", 00:34:45.556 "trtype": "$TEST_TRANSPORT", 00:34:45.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:45.556 "adrfam": "ipv4", 00:34:45.556 "trsvcid": "$NVMF_PORT", 00:34:45.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:45.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:45.556 "hdgst": ${hdgst:-false}, 00:34:45.556 "ddgst": ${ddgst:-false} 00:34:45.556 }, 00:34:45.556 "method": "bdev_nvme_attach_controller" 00:34:45.556 } 00:34:45.556 EOF 00:34:45.556 )") 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:45.556 16:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
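The fio_bdev helper traced above runs fio with SPDK's bdev ioengine preloaded and hands it two anonymous pipes: /dev/fd/62 carries the generated bdev JSON configuration and /dev/fd/61 the job file from gen_fio_conf. A hedged sketch of the same plumbing using process substitution (the helper's exact fd handling is not shown in the log, and create_json_sub_conf/gen_fio_conf are only defined once dif.sh and the nvmf common helpers are sourced):

    # run fio against the NVMe-oF target through the SPDK bdev plugin
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(create_json_sub_conf 0) <(gen_fio_conf)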
00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:45.557 "params": { 00:34:45.557 "name": "Nvme0", 00:34:45.557 "trtype": "tcp", 00:34:45.557 "traddr": "10.0.0.2", 00:34:45.557 "adrfam": "ipv4", 00:34:45.557 "trsvcid": "4420", 00:34:45.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:45.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:45.557 "hdgst": false, 00:34:45.557 "ddgst": false 00:34:45.557 }, 00:34:45.557 "method": "bdev_nvme_attach_controller" 00:34:45.557 }' 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:45.557 16:59:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:45.815 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:45.815 ... 
00:34:45.815 fio-3.35 00:34:45.815 Starting 3 threads 00:34:52.383 00:34:52.383 filename0: (groupid=0, jobs=1): err= 0: pid=798411: Mon Oct 14 16:59:56 2024 00:34:52.383 read: IOPS=319, BW=40.0MiB/s (41.9MB/s)(200MiB/5006msec) 00:34:52.383 slat (nsec): min=6046, max=31605, avg=10824.61, stdev=2186.44 00:34:52.383 clat (usec): min=2833, max=50971, avg=9369.97, stdev=6696.73 00:34:52.383 lat (usec): min=2840, max=50983, avg=9380.80, stdev=6696.61 00:34:52.383 clat percentiles (usec): 00:34:52.383 | 1.00th=[ 3523], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 7439], 00:34:52.383 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:34:52.383 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10683], 00:34:52.383 | 99.00th=[49021], 99.50th=[49021], 99.90th=[50070], 99.95th=[51119], 00:34:52.383 | 99.99th=[51119] 00:34:52.383 bw ( KiB/s): min=30720, max=47104, per=34.05%, avg=40908.80, stdev=5888.37, samples=10 00:34:52.383 iops : min= 240, max= 368, avg=319.60, stdev=46.00, samples=10 00:34:52.383 lat (msec) : 4=2.38%, 10=86.69%, 20=8.12%, 50=2.62%, 100=0.19% 00:34:52.383 cpu : usr=94.67%, sys=5.03%, ctx=15, majf=0, minf=37 00:34:52.383 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.383 issued rwts: total=1600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.383 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:52.383 filename0: (groupid=0, jobs=1): err= 0: pid=798412: Mon Oct 14 16:59:56 2024 00:34:52.383 read: IOPS=324, BW=40.5MiB/s (42.5MB/s)(205MiB/5045msec) 00:34:52.383 slat (nsec): min=6099, max=53583, avg=10909.49, stdev=2200.91 00:34:52.383 clat (usec): min=3723, max=50099, avg=9212.59, stdev=4759.92 00:34:52.383 lat (usec): min=3730, max=50110, avg=9223.50, stdev=4759.91 00:34:52.383 clat percentiles (usec): 00:34:52.383 | 1.00th=[ 5276], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 7177], 00:34:52.383 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9110], 00:34:52.383 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10814], 95.00th=[11469], 00:34:52.383 | 99.00th=[45351], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070], 00:34:52.383 | 99.99th=[50070] 00:34:52.383 bw ( KiB/s): min=31232, max=46848, per=34.82%, avg=41830.40, stdev=4412.98, samples=10 00:34:52.383 iops : min= 244, max= 366, avg=326.80, stdev=34.48, samples=10 00:34:52.383 lat (msec) : 4=0.37%, 10=78.91%, 20=19.32%, 50=1.28%, 100=0.12% 00:34:52.383 cpu : usr=94.43%, sys=5.25%, ctx=9, majf=0, minf=117 00:34:52.383 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.383 issued rwts: total=1636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.383 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:52.383 filename0: (groupid=0, jobs=1): err= 0: pid=798413: Mon Oct 14 16:59:56 2024 00:34:52.383 read: IOPS=297, BW=37.1MiB/s (38.9MB/s)(187MiB/5045msec) 00:34:52.383 slat (nsec): min=6051, max=28480, avg=10718.84, stdev=2099.32 00:34:52.383 clat (usec): min=4348, max=51603, avg=10053.75, stdev=6764.26 00:34:52.383 lat (usec): min=4354, max=51615, avg=10064.47, stdev=6764.25 00:34:52.383 clat percentiles (usec): 00:34:52.383 | 1.00th=[ 5145], 5.00th=[ 5997], 10.00th=[ 
6521], 20.00th=[ 7832], 00:34:52.383 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:34:52.383 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[11076], 95.00th=[11600], 00:34:52.383 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50594], 99.95th=[51643], 00:34:52.383 | 99.99th=[51643] 00:34:52.383 bw ( KiB/s): min=30208, max=43520, per=31.90%, avg=38323.20, stdev=4683.32, samples=10 00:34:52.383 iops : min= 236, max= 340, avg=299.40, stdev=36.59, samples=10 00:34:52.383 lat (msec) : 10=73.72%, 20=23.35%, 50=2.67%, 100=0.27% 00:34:52.383 cpu : usr=94.57%, sys=5.13%, ctx=16, majf=0, minf=21 00:34:52.384 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:52.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.384 issued rwts: total=1499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:52.384 00:34:52.384 Run status group 0 (all jobs): 00:34:52.384 READ: bw=117MiB/s (123MB/s), 37.1MiB/s-40.5MiB/s (38.9MB/s-42.5MB/s), io=592MiB (621MB), run=5006-5045msec 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 bdev_null0 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 [2024-10-14 16:59:56.343476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 bdev_null1 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 16:59:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 bdev_null2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:52.384 { 00:34:52.384 "params": { 00:34:52.384 "name": "Nvme$subsystem", 00:34:52.384 "trtype": "$TEST_TRANSPORT", 00:34:52.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:52.384 "adrfam": "ipv4", 00:34:52.384 "trsvcid": "$NVMF_PORT", 00:34:52.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:52.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:52.384 "hdgst": ${hdgst:-false}, 00:34:52.384 "ddgst": ${ddgst:-false} 00:34:52.384 }, 00:34:52.384 "method": "bdev_nvme_attach_controller" 00:34:52.384 } 00:34:52.384 EOF 00:34:52.384 )") 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:52.384 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:52.385 { 00:34:52.385 "params": { 00:34:52.385 "name": "Nvme$subsystem", 00:34:52.385 "trtype": "$TEST_TRANSPORT", 00:34:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:52.385 "adrfam": "ipv4", 00:34:52.385 "trsvcid": "$NVMF_PORT", 00:34:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:52.385 "hdgst": ${hdgst:-false}, 00:34:52.385 "ddgst": ${ddgst:-false} 00:34:52.385 }, 00:34:52.385 "method": "bdev_nvme_attach_controller" 00:34:52.385 } 00:34:52.385 EOF 00:34:52.385 )") 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:52.385 { 00:34:52.385 "params": { 00:34:52.385 "name": "Nvme$subsystem", 00:34:52.385 "trtype": "$TEST_TRANSPORT", 00:34:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:52.385 "adrfam": "ipv4", 00:34:52.385 "trsvcid": "$NVMF_PORT", 00:34:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:52.385 "hdgst": ${hdgst:-false}, 00:34:52.385 "ddgst": ${ddgst:-false} 00:34:52.385 }, 00:34:52.385 "method": "bdev_nvme_attach_controller" 00:34:52.385 } 00:34:52.385 EOF 00:34:52.385 )") 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:52.385 "params": { 00:34:52.385 "name": "Nvme0", 00:34:52.385 "trtype": "tcp", 00:34:52.385 "traddr": "10.0.0.2", 00:34:52.385 "adrfam": "ipv4", 00:34:52.385 "trsvcid": "4420", 00:34:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:52.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:52.385 "hdgst": false, 00:34:52.385 "ddgst": false 00:34:52.385 }, 00:34:52.385 "method": "bdev_nvme_attach_controller" 00:34:52.385 },{ 00:34:52.385 "params": { 00:34:52.385 "name": "Nvme1", 00:34:52.385 "trtype": "tcp", 00:34:52.385 "traddr": "10.0.0.2", 00:34:52.385 "adrfam": "ipv4", 00:34:52.385 "trsvcid": "4420", 00:34:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:52.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:52.385 "hdgst": false, 00:34:52.385 "ddgst": false 00:34:52.385 }, 00:34:52.385 "method": "bdev_nvme_attach_controller" 00:34:52.385 },{ 00:34:52.385 "params": { 00:34:52.385 "name": "Nvme2", 00:34:52.385 "trtype": "tcp", 00:34:52.385 "traddr": "10.0.0.2", 00:34:52.385 "adrfam": "ipv4", 00:34:52.385 "trsvcid": "4420", 00:34:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:52.385 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:52.385 "hdgst": false, 00:34:52.385 "ddgst": false 00:34:52.385 }, 00:34:52.385 "method": "bdev_nvme_attach_controller" 00:34:52.385 }' 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # 
asan_lib= 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:52.385 16:59:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.385 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:52.385 ... 00:34:52.385 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:52.385 ... 00:34:52.385 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:52.385 ... 00:34:52.385 fio-3.35 00:34:52.385 Starting 24 threads 00:35:04.579 00:35:04.579 filename0: (groupid=0, jobs=1): err= 0: pid=799467: Mon Oct 14 17:00:07 2024 00:35:04.579 read: IOPS=64, BW=257KiB/s (263kB/s)(2600KiB/10107msec) 00:35:04.579 slat (nsec): min=7289, max=30580, avg=9291.74, stdev=2640.03 00:35:04.579 clat (msec): min=33, max=409, avg=248.71, stdev=74.09 00:35:04.579 lat (msec): min=33, max=409, avg=248.72, stdev=74.08 00:35:04.579 clat percentiles (msec): 00:35:04.579 | 1.00th=[ 34], 5.00th=[ 95], 10.00th=[ 165], 20.00th=[ 207], 00:35:04.579 | 30.00th=[ 224], 40.00th=[ 251], 50.00th=[ 268], 60.00th=[ 271], 00:35:04.579 | 70.00th=[ 271], 80.00th=[ 296], 90.00th=[ 334], 95.00th=[ 368], 00:35:04.579 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:35:04.579 | 99.99th=[ 409] 00:35:04.579 bw ( KiB/s): min= 176, max= 480, per=4.46%, avg=253.60, stdev=64.89, samples=20 00:35:04.579 iops : min= 44, max= 120, avg=63.40, stdev=16.22, samples=20 00:35:04.579 lat (msec) : 50=2.46%, 100=3.69%, 250=32.31%, 500=61.54% 00:35:04.579 cpu : usr=98.70%, sys=0.91%, ctx=59, majf=0, minf=19 00:35:04.579 IO depths : 1=0.3%, 2=0.8%, 4=6.8%, 8=79.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:35:04.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.579 complete : 0=0.0%, 4=88.7%, 8=6.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.579 issued rwts: total=650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.579 filename0: (groupid=0, jobs=1): err= 0: pid=799468: Mon Oct 14 17:00:07 2024 00:35:04.579 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10073msec) 00:35:04.579 slat (nsec): min=4182, max=20905, avg=9018.96, stdev=2324.71 00:35:04.579 clat (msec): min=97, max=432, avg=271.50, stdev=58.95 00:35:04.579 lat (msec): min=97, max=432, avg=271.51, stdev=58.95 00:35:04.579 clat percentiles (msec): 00:35:04.579 | 1.00th=[ 99], 5.00th=[ 220], 10.00th=[ 224], 20.00th=[ 234], 00:35:04.579 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 266], 00:35:04.579 | 70.00th=[ 271], 80.00th=[ 300], 90.00th=[ 380], 95.00th=[ 401], 00:35:04.579 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:35:04.579 | 99.99th=[ 435] 00:35:04.579 bw ( KiB/s): min= 128, max= 368, per=4.06%, avg=230.40, stdev=48.81, samples=20 00:35:04.579 iops : min= 32, max= 92, avg=57.60, stdev=12.20, samples=20 00:35:04.579 lat (msec) : 100=1.01%, 250=48.65%, 500=50.34% 00:35:04.579 cpu : usr=98.80%, sys=0.83%, ctx=9, majf=0, minf=29 00:35:04.579 IO depths : 1=0.7%, 2=2.9%, 4=11.8%, 8=72.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:04.579 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.579 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.579 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.579 filename0: (groupid=0, jobs=1): err= 0: pid=799469: Mon Oct 14 17:00:07 2024 00:35:04.579 read: IOPS=41, BW=166KiB/s (169kB/s)(1664KiB/10054msec) 00:35:04.579 slat (nsec): min=7274, max=42875, avg=10317.92, stdev=5917.85 00:35:04.579 clat (msec): min=225, max=621, avg=386.60, stdev=75.92 00:35:04.579 lat (msec): min=225, max=621, avg=386.61, stdev=75.92 00:35:04.579 clat percentiles (msec): 00:35:04.579 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 309], 20.00th=[ 355], 00:35:04.579 | 30.00th=[ 368], 40.00th=[ 376], 50.00th=[ 376], 60.00th=[ 380], 00:35:04.579 | 70.00th=[ 405], 80.00th=[ 430], 90.00th=[ 439], 95.00th=[ 542], 00:35:04.579 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:35:04.579 | 99.99th=[ 625] 00:35:04.579 bw ( KiB/s): min= 112, max= 256, per=2.96%, avg=168.42, stdev=59.95, samples=19 00:35:04.579 iops : min= 28, max= 64, avg=42.11, stdev=14.99, samples=19 00:35:04.579 lat (msec) : 250=5.29%, 500=87.50%, 750=7.21% 00:35:04.579 cpu : usr=98.77%, sys=0.86%, ctx=15, majf=0, minf=20 00:35:04.579 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:04.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.579 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.579 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.579 filename0: (groupid=0, jobs=1): err= 0: pid=799470: Mon Oct 14 17:00:07 2024 00:35:04.579 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10056msec) 00:35:04.579 slat (nsec): min=7328, max=32302, avg=9696.38, stdev=3165.01 00:35:04.579 clat (msec): min=115, max=433, avg=271.69, stdev=57.13 00:35:04.579 lat (msec): min=115, max=433, avg=271.70, stdev=57.13 00:35:04.579 clat percentiles (msec): 00:35:04.579 | 1.00th=[ 116], 5.00th=[ 220], 10.00th=[ 224], 20.00th=[ 239], 00:35:04.579 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 268], 00:35:04.579 | 70.00th=[ 275], 80.00th=[ 296], 90.00th=[ 380], 95.00th=[ 401], 00:35:04.579 | 99.00th=[ 430], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:35:04.579 | 99.99th=[ 435] 00:35:04.579 bw ( KiB/s): min= 128, max= 368, per=4.06%, avg=230.40, stdev=48.81, samples=20 00:35:04.579 iops : min= 32, max= 92, avg=57.60, stdev=12.20, samples=20 00:35:04.579 lat (msec) : 250=50.00%, 500=50.00% 00:35:04.579 cpu : usr=98.72%, sys=0.91%, ctx=14, majf=0, minf=36 00:35:04.579 IO depths : 1=0.7%, 2=2.2%, 4=9.8%, 8=74.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:04.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.579 complete : 0=0.0%, 4=89.5%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.579 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.579 filename0: (groupid=0, jobs=1): err= 0: pid=799471: Mon Oct 14 17:00:07 2024 00:35:04.579 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10089msec) 00:35:04.579 slat (nsec): min=7239, max=33499, avg=10635.96, stdev=4455.50 00:35:04.579 clat (msec): min=135, max=316, avg=252.09, stdev=20.70 00:35:04.579 lat (msec): min=135, max=316, avg=252.10, stdev=20.69 
00:35:04.579 clat percentiles (msec): 00:35:04.579 | 1.00th=[ 211], 5.00th=[ 215], 10.00th=[ 222], 20.00th=[ 226], 00:35:04.579 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 264], 00:35:04.579 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:04.579 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 317], 99.95th=[ 317], 00:35:04.579 | 99.99th=[ 317] 00:35:04.579 bw ( KiB/s): min= 144, max= 368, per=4.39%, avg=249.60, stdev=44.17, samples=20 00:35:04.579 iops : min= 36, max= 92, avg=62.40, stdev=11.04, samples=20 00:35:04.579 lat (msec) : 250=40.00%, 500=60.00% 00:35:04.579 cpu : usr=98.84%, sys=0.78%, ctx=14, majf=0, minf=34 00:35:04.579 IO depths : 1=0.5%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:04.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.579 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.579 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.579 filename0: (groupid=0, jobs=1): err= 0: pid=799472: Mon Oct 14 17:00:07 2024 00:35:04.579 read: IOPS=60, BW=242KiB/s (248kB/s)(2440KiB/10071msec) 00:35:04.579 slat (nsec): min=6026, max=36122, avg=9519.96, stdev=4242.22 00:35:04.579 clat (msec): min=123, max=434, avg=263.58, stdev=46.44 00:35:04.579 lat (msec): min=123, max=434, avg=263.59, stdev=46.44 00:35:04.579 clat percentiles (msec): 00:35:04.579 | 1.00th=[ 209], 5.00th=[ 215], 10.00th=[ 220], 20.00th=[ 226], 00:35:04.579 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 268], 00:35:04.579 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 313], 95.00th=[ 376], 00:35:04.579 | 99.00th=[ 426], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:35:04.579 | 99.99th=[ 435] 00:35:04.580 bw ( KiB/s): min= 128, max= 368, per=4.18%, avg=237.60, stdev=47.37, samples=20 00:35:04.580 iops : min= 32, max= 92, avg=59.40, stdev=11.84, samples=20 00:35:04.580 lat (msec) : 250=40.98%, 500=59.02% 00:35:04.580 cpu : usr=98.85%, sys=0.76%, ctx=13, majf=0, minf=29 00:35:04.580 IO depths : 1=0.5%, 2=1.8%, 4=9.5%, 8=75.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 complete : 0=0.0%, 4=89.5%, 8=5.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.580 filename0: (groupid=0, jobs=1): err= 0: pid=799473: Mon Oct 14 17:00:07 2024 00:35:04.580 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10089msec) 00:35:04.580 slat (nsec): min=6429, max=48389, avg=11054.52, stdev=5908.82 00:35:04.580 clat (msec): min=157, max=287, avg=252.09, stdev=20.02 00:35:04.580 lat (msec): min=157, max=287, avg=252.10, stdev=20.02 00:35:04.580 clat percentiles (msec): 00:35:04.580 | 1.00th=[ 211], 5.00th=[ 215], 10.00th=[ 222], 20.00th=[ 226], 00:35:04.580 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 264], 00:35:04.580 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:04.580 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 288], 00:35:04.580 | 99.99th=[ 288] 00:35:04.580 bw ( KiB/s): min= 144, max= 368, per=4.39%, avg=249.60, stdev=44.47, samples=20 00:35:04.580 iops : min= 36, max= 92, avg=62.40, stdev=11.12, samples=20 00:35:04.580 lat (msec) : 250=39.69%, 500=60.31% 00:35:04.580 cpu : usr=98.82%, sys=0.79%, ctx=27, majf=0, minf=19 00:35:04.580 IO depths 
: 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.580 filename0: (groupid=0, jobs=1): err= 0: pid=799474: Mon Oct 14 17:00:07 2024 00:35:04.580 read: IOPS=65, BW=260KiB/s (266kB/s)(2624KiB/10089msec) 00:35:04.580 slat (nsec): min=7255, max=45230, avg=10619.87, stdev=4560.57 00:35:04.580 clat (msec): min=101, max=285, avg=245.93, stdev=36.95 00:35:04.580 lat (msec): min=101, max=285, avg=245.94, stdev=36.95 00:35:04.580 clat percentiles (msec): 00:35:04.580 | 1.00th=[ 102], 5.00th=[ 163], 10.00th=[ 222], 20.00th=[ 226], 00:35:04.580 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 264], 00:35:04.580 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:04.580 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 288], 00:35:04.580 | 99.99th=[ 288] 00:35:04.580 bw ( KiB/s): min= 144, max= 384, per=4.50%, avg=256.00, stdev=53.45, samples=20 00:35:04.580 iops : min= 36, max= 96, avg=64.00, stdev=13.36, samples=20 00:35:04.580 lat (msec) : 250=41.16%, 500=58.84% 00:35:04.580 cpu : usr=98.79%, sys=0.83%, ctx=8, majf=0, minf=27 00:35:04.580 IO depths : 1=0.8%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:35:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.580 filename1: (groupid=0, jobs=1): err= 0: pid=799475: Mon Oct 14 17:00:07 2024 00:35:04.580 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10089msec) 00:35:04.580 slat (nsec): min=7264, max=34281, avg=11053.03, stdev=5125.80 00:35:04.580 clat (msec): min=152, max=300, avg=252.09, stdev=20.25 00:35:04.580 lat (msec): min=152, max=300, avg=252.10, stdev=20.25 00:35:04.580 clat percentiles (msec): 00:35:04.580 | 1.00th=[ 211], 5.00th=[ 215], 10.00th=[ 222], 20.00th=[ 226], 00:35:04.580 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 264], 00:35:04.580 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:04.580 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 300], 99.95th=[ 300], 00:35:04.580 | 99.99th=[ 300] 00:35:04.580 bw ( KiB/s): min= 144, max= 368, per=4.39%, avg=249.60, stdev=44.47, samples=20 00:35:04.580 iops : min= 36, max= 92, avg=62.40, stdev=11.12, samples=20 00:35:04.580 lat (msec) : 250=40.00%, 500=60.00% 00:35:04.580 cpu : usr=98.82%, sys=0.79%, ctx=12, majf=0, minf=27 00:35:04.580 IO depths : 1=0.3%, 2=6.6%, 4=25.0%, 8=55.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.580 filename1: (groupid=0, jobs=1): err= 0: pid=799476: Mon Oct 14 17:00:07 2024 00:35:04.580 read: IOPS=59, BW=238KiB/s (244kB/s)(2400KiB/10080msec) 00:35:04.580 slat (nsec): min=4154, max=27984, avg=8982.46, stdev=2340.21 00:35:04.580 clat (msec): min=122, max=432, avg=268.06, stdev=49.82 
00:35:04.580 lat (msec): min=122, max=432, avg=268.07, stdev=49.82 00:35:04.580 clat percentiles (msec): 00:35:04.580 | 1.00th=[ 124], 5.00th=[ 220], 10.00th=[ 224], 20.00th=[ 243], 00:35:04.580 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 268], 00:35:04.580 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 351], 95.00th=[ 401], 00:35:04.580 | 99.00th=[ 430], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:35:04.580 | 99.99th=[ 435] 00:35:04.580 bw ( KiB/s): min= 128, max= 304, per=4.11%, avg=233.60, stdev=41.01, samples=20 00:35:04.580 iops : min= 32, max= 76, avg=58.40, stdev=10.25, samples=20 00:35:04.580 lat (msec) : 250=36.00%, 500=64.00% 00:35:04.580 cpu : usr=98.71%, sys=0.91%, ctx=8, majf=0, minf=33 00:35:04.580 IO depths : 1=0.3%, 2=1.5%, 4=9.0%, 8=76.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:35:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 complete : 0=0.0%, 4=89.4%, 8=5.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 issued rwts: total=600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.580 filename1: (groupid=0, jobs=1): err= 0: pid=799477: Mon Oct 14 17:00:07 2024 00:35:04.580 read: IOPS=66, BW=264KiB/s (271kB/s)(2672KiB/10105msec) 00:35:04.580 slat (nsec): min=6131, max=40407, avg=10653.14, stdev=5697.41 00:35:04.580 clat (msec): min=30, max=343, avg=241.64, stdev=56.77 00:35:04.580 lat (msec): min=30, max=343, avg=241.65, stdev=56.77 00:35:04.580 clat percentiles (msec): 00:35:04.580 | 1.00th=[ 31], 5.00th=[ 95], 10.00th=[ 215], 20.00th=[ 226], 00:35:04.580 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 266], 00:35:04.580 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:04.580 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:35:04.580 | 99.99th=[ 342] 00:35:04.580 bw ( KiB/s): min= 176, max= 512, per=4.59%, avg=260.80, stdev=66.09, samples=20 00:35:04.580 iops : min= 44, max= 128, avg=65.20, stdev=16.52, samples=20 00:35:04.580 lat (msec) : 50=2.40%, 100=4.79%, 250=30.84%, 500=61.98% 00:35:04.580 cpu : usr=98.85%, sys=0.78%, ctx=14, majf=0, minf=67 00:35:04.580 IO depths : 1=0.7%, 2=2.2%, 4=10.6%, 8=74.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 complete : 0=0.0%, 4=90.0%, 8=4.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 issued rwts: total=668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.580 filename1: (groupid=0, jobs=1): err= 0: pid=799478: Mon Oct 14 17:00:07 2024 00:35:04.580 read: IOPS=57, BW=231KiB/s (236kB/s)(2328KiB/10084msec) 00:35:04.580 slat (nsec): min=4238, max=28719, avg=9005.01, stdev=2163.23 00:35:04.580 clat (msec): min=110, max=539, avg=276.45, stdev=69.06 00:35:04.580 lat (msec): min=110, max=539, avg=276.46, stdev=69.06 00:35:04.580 clat percentiles (msec): 00:35:04.580 | 1.00th=[ 111], 5.00th=[ 215], 10.00th=[ 224], 20.00th=[ 239], 00:35:04.580 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 257], 60.00th=[ 264], 00:35:04.580 | 70.00th=[ 268], 80.00th=[ 288], 90.00th=[ 376], 95.00th=[ 435], 00:35:04.580 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:35:04.580 | 99.99th=[ 542] 00:35:04.580 bw ( KiB/s): min= 176, max= 368, per=4.20%, avg=238.32, stdev=42.96, samples=19 00:35:04.580 iops : min= 44, max= 92, avg=59.58, stdev=10.74, samples=19 00:35:04.580 lat (msec) : 250=35.74%, 500=61.51%, 
750=2.75% 00:35:04.580 cpu : usr=98.66%, sys=0.96%, ctx=9, majf=0, minf=24 00:35:04.580 IO depths : 1=0.5%, 2=1.9%, 4=9.3%, 8=75.6%, 16=12.7%, 32=0.0%, >=64=0.0% 00:35:04.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 complete : 0=0.0%, 4=89.3%, 8=6.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.580 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.580 filename1: (groupid=0, jobs=1): err= 0: pid=799479: Mon Oct 14 17:00:07 2024 00:35:04.580 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10086msec) 00:35:04.580 slat (nsec): min=7329, max=40818, avg=9506.50, stdev=3429.35 00:35:04.580 clat (msec): min=101, max=291, avg=250.31, stdev=30.20 00:35:04.580 lat (msec): min=101, max=291, avg=250.32, stdev=30.20 00:35:04.580 clat percentiles (msec): 00:35:04.580 | 1.00th=[ 103], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 234], 00:35:04.580 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 264], 00:35:04.580 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:04.580 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 292], 99.95th=[ 292], 00:35:04.580 | 99.99th=[ 292] 00:35:04.581 bw ( KiB/s): min= 144, max= 368, per=4.39%, avg=249.60, stdev=44.47, samples=20 00:35:04.581 iops : min= 36, max= 92, avg=62.40, stdev=11.12, samples=20 00:35:04.581 lat (msec) : 250=39.69%, 500=60.31% 00:35:04.581 cpu : usr=98.80%, sys=0.83%, ctx=8, majf=0, minf=34 00:35:04.581 IO depths : 1=0.6%, 2=6.9%, 4=25.0%, 8=55.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:35:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=799480: Mon Oct 14 17:00:07 2024 00:35:04.581 read: IOPS=42, BW=171KiB/s (175kB/s)(1720KiB/10056msec) 00:35:04.581 slat (nsec): min=7301, max=31902, avg=8992.85, stdev=2333.34 00:35:04.581 clat (msec): min=80, max=536, avg=374.07, stdev=76.41 00:35:04.581 lat (msec): min=80, max=536, avg=374.08, stdev=76.41 00:35:04.581 clat percentiles (msec): 00:35:04.581 | 1.00th=[ 81], 5.00th=[ 255], 10.00th=[ 300], 20.00th=[ 351], 00:35:04.581 | 30.00th=[ 363], 40.00th=[ 376], 50.00th=[ 376], 60.00th=[ 384], 00:35:04.581 | 70.00th=[ 405], 80.00th=[ 430], 90.00th=[ 439], 95.00th=[ 477], 00:35:04.581 | 99.00th=[ 531], 99.50th=[ 535], 99.90th=[ 535], 99.95th=[ 535], 00:35:04.581 | 99.99th=[ 535] 00:35:04.581 bw ( KiB/s): min= 128, max= 256, per=2.91%, avg=165.60, stdev=55.73, samples=20 00:35:04.581 iops : min= 32, max= 64, avg=41.40, stdev=13.93, samples=20 00:35:04.581 lat (msec) : 100=3.26%, 250=0.93%, 500=92.56%, 750=3.26% 00:35:04.581 cpu : usr=98.64%, sys=0.98%, ctx=12, majf=0, minf=20 00:35:04.581 IO depths : 1=3.5%, 2=9.8%, 4=25.1%, 8=52.8%, 16=8.8%, 32=0.0%, >=64=0.0% 00:35:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 issued rwts: total=430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=799481: Mon Oct 14 17:00:07 2024 00:35:04.581 read: IOPS=58, BW=236KiB/s (241kB/s)(2376KiB/10084msec) 00:35:04.581 slat (nsec): 
min=6072, max=33877, avg=9266.68, stdev=3045.26 00:35:04.581 clat (msec): min=95, max=432, avg=270.86, stdev=57.81 00:35:04.581 lat (msec): min=95, max=432, avg=270.87, stdev=57.81 00:35:04.581 clat percentiles (msec): 00:35:04.581 | 1.00th=[ 95], 5.00th=[ 213], 10.00th=[ 222], 20.00th=[ 232], 00:35:04.581 | 30.00th=[ 236], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 271], 00:35:04.581 | 70.00th=[ 284], 80.00th=[ 305], 90.00th=[ 363], 95.00th=[ 401], 00:35:04.581 | 99.00th=[ 430], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:35:04.581 | 99.99th=[ 435] 00:35:04.581 bw ( KiB/s): min= 128, max= 304, per=4.07%, avg=231.20, stdev=42.96, samples=20 00:35:04.581 iops : min= 32, max= 76, avg=57.80, stdev=10.74, samples=20 00:35:04.581 lat (msec) : 100=1.01%, 250=47.47%, 500=51.52% 00:35:04.581 cpu : usr=98.80%, sys=0.83%, ctx=13, majf=0, minf=25 00:35:04.581 IO depths : 1=0.3%, 2=0.8%, 4=6.6%, 8=79.3%, 16=13.0%, 32=0.0%, >=64=0.0% 00:35:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 complete : 0=0.0%, 4=88.5%, 8=7.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 issued rwts: total=594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.581 filename1: (groupid=0, jobs=1): err= 0: pid=799482: Mon Oct 14 17:00:07 2024 00:35:04.581 read: IOPS=63, BW=254KiB/s (260kB/s)(2560KiB/10089msec) 00:35:04.581 slat (nsec): min=7295, max=33121, avg=10587.03, stdev=4200.07 00:35:04.581 clat (msec): min=146, max=305, avg=252.09, stdev=20.40 00:35:04.581 lat (msec): min=146, max=305, avg=252.10, stdev=20.40 00:35:04.581 clat percentiles (msec): 00:35:04.581 | 1.00th=[ 211], 5.00th=[ 215], 10.00th=[ 222], 20.00th=[ 226], 00:35:04.581 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 264], 00:35:04.581 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:04.581 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 305], 99.95th=[ 305], 00:35:04.581 | 99.99th=[ 305] 00:35:04.581 bw ( KiB/s): min= 144, max= 368, per=4.39%, avg=249.60, stdev=44.47, samples=20 00:35:04.581 iops : min= 36, max= 92, avg=62.40, stdev=11.12, samples=20 00:35:04.581 lat (msec) : 250=40.00%, 500=60.00% 00:35:04.581 cpu : usr=98.64%, sys=0.98%, ctx=6, majf=0, minf=23 00:35:04.581 IO depths : 1=0.3%, 2=6.6%, 4=25.0%, 8=55.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.581 filename2: (groupid=0, jobs=1): err= 0: pid=799483: Mon Oct 14 17:00:07 2024 00:35:04.581 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10128msec) 00:35:04.581 slat (nsec): min=7298, max=32685, avg=10458.80, stdev=4196.35 00:35:04.581 clat (msec): min=179, max=309, avg=252.21, stdev=19.96 00:35:04.581 lat (msec): min=179, max=309, avg=252.22, stdev=19.96 00:35:04.581 clat percentiles (msec): 00:35:04.581 | 1.00th=[ 211], 5.00th=[ 215], 10.00th=[ 224], 20.00th=[ 226], 00:35:04.581 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 264], 00:35:04.581 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:04.581 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 309], 00:35:04.581 | 99.99th=[ 309] 00:35:04.581 bw ( KiB/s): min= 144, max= 368, per=4.39%, avg=249.60, stdev=44.47, samples=20 00:35:04.581 iops : 
min= 36, max= 92, avg=62.40, stdev=11.12, samples=20 00:35:04.581 lat (msec) : 250=40.00%, 500=60.00% 00:35:04.581 cpu : usr=98.58%, sys=1.05%, ctx=14, majf=0, minf=32 00:35:04.581 IO depths : 1=0.3%, 2=6.6%, 4=25.0%, 8=55.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.581 filename2: (groupid=0, jobs=1): err= 0: pid=799484: Mon Oct 14 17:00:07 2024 00:35:04.581 read: IOPS=60, BW=243KiB/s (248kB/s)(2440KiB/10056msec) 00:35:04.581 slat (nsec): min=4307, max=21459, avg=8905.44, stdev=2091.97 00:35:04.581 clat (msec): min=90, max=439, avg=263.55, stdev=39.87 00:35:04.581 lat (msec): min=90, max=439, avg=263.56, stdev=39.87 00:35:04.581 clat percentiles (msec): 00:35:04.581 | 1.00th=[ 215], 5.00th=[ 222], 10.00th=[ 226], 20.00th=[ 245], 00:35:04.581 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 264], 60.00th=[ 268], 00:35:04.581 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 275], 95.00th=[ 363], 00:35:04.581 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 439], 99.95th=[ 439], 00:35:04.581 | 99.99th=[ 439] 00:35:04.581 bw ( KiB/s): min= 128, max= 256, per=4.18%, avg=237.60, stdev=36.81, samples=20 00:35:04.581 iops : min= 32, max= 64, avg=59.40, stdev= 9.20, samples=20 00:35:04.581 lat (msec) : 100=0.98%, 250=30.49%, 500=68.52% 00:35:04.581 cpu : usr=98.69%, sys=0.93%, ctx=11, majf=0, minf=22 00:35:04.581 IO depths : 1=0.7%, 2=1.5%, 4=8.5%, 8=77.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.581 filename2: (groupid=0, jobs=1): err= 0: pid=799485: Mon Oct 14 17:00:07 2024 00:35:04.581 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10056msec) 00:35:04.581 slat (nsec): min=4616, max=32813, avg=10100.15, stdev=4769.49 00:35:04.581 clat (msec): min=105, max=434, avg=271.68, stdev=56.59 00:35:04.581 lat (msec): min=105, max=434, avg=271.69, stdev=56.59 00:35:04.581 clat percentiles (msec): 00:35:04.581 | 1.00th=[ 106], 5.00th=[ 220], 10.00th=[ 224], 20.00th=[ 236], 00:35:04.581 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 266], 00:35:04.581 | 70.00th=[ 271], 80.00th=[ 288], 90.00th=[ 376], 95.00th=[ 401], 00:35:04.581 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:35:04.581 | 99.99th=[ 435] 00:35:04.581 bw ( KiB/s): min= 128, max= 368, per=4.06%, avg=230.40, stdev=48.81, samples=20 00:35:04.581 iops : min= 32, max= 92, avg=57.60, stdev=12.20, samples=20 00:35:04.581 lat (msec) : 250=37.84%, 500=62.16% 00:35:04.581 cpu : usr=98.83%, sys=0.79%, ctx=13, majf=0, minf=20 00:35:04.581 IO depths : 1=0.7%, 2=2.2%, 4=9.8%, 8=74.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 complete : 0=0.0%, 4=89.5%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.581 filename2: (groupid=0, jobs=1): err= 0: pid=799486: Mon Oct 14 17:00:07 2024 00:35:04.581 
read: IOPS=62, BW=249KiB/s (255kB/s)(2512KiB/10087msec) 00:35:04.581 slat (nsec): min=7161, max=38139, avg=9476.57, stdev=3403.24 00:35:04.581 clat (msec): min=101, max=428, avg=256.14, stdev=44.78 00:35:04.581 lat (msec): min=101, max=428, avg=256.15, stdev=44.78 00:35:04.581 clat percentiles (msec): 00:35:04.581 | 1.00th=[ 103], 5.00th=[ 215], 10.00th=[ 220], 20.00th=[ 228], 00:35:04.581 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 257], 60.00th=[ 268], 00:35:04.581 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 326], 00:35:04.581 | 99.00th=[ 426], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:35:04.581 | 99.99th=[ 430] 00:35:04.581 bw ( KiB/s): min= 176, max= 368, per=4.30%, avg=244.80, stdev=42.20, samples=20 00:35:04.581 iops : min= 44, max= 92, avg=61.20, stdev=10.55, samples=20 00:35:04.581 lat (msec) : 250=43.63%, 500=56.37% 00:35:04.581 cpu : usr=98.75%, sys=0.87%, ctx=10, majf=0, minf=27 00:35:04.581 IO depths : 1=0.5%, 2=3.0%, 4=13.5%, 8=70.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:04.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.581 issued rwts: total=628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.581 filename2: (groupid=0, jobs=1): err= 0: pid=799487: Mon Oct 14 17:00:07 2024 00:35:04.581 read: IOPS=57, BW=230KiB/s (235kB/s)(2312KiB/10055msec) 00:35:04.581 slat (nsec): min=4301, max=18900, avg=9025.06, stdev=2023.93 00:35:04.582 clat (msec): min=81, max=607, avg=278.03, stdev=71.17 00:35:04.582 lat (msec): min=82, max=607, avg=278.04, stdev=71.17 00:35:04.582 clat percentiles (msec): 00:35:04.582 | 1.00th=[ 83], 5.00th=[ 220], 10.00th=[ 224], 20.00th=[ 241], 00:35:04.582 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 266], 00:35:04.582 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 376], 95.00th=[ 435], 00:35:04.582 | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 609], 99.95th=[ 609], 00:35:04.582 | 99.99th=[ 609] 00:35:04.582 bw ( KiB/s): min= 176, max= 336, per=4.16%, avg=236.63, stdev=40.12, samples=19 00:35:04.582 iops : min= 44, max= 84, avg=59.16, stdev=10.03, samples=19 00:35:04.582 lat (msec) : 100=1.04%, 250=35.29%, 500=60.90%, 750=2.77% 00:35:04.582 cpu : usr=98.72%, sys=0.90%, ctx=11, majf=0, minf=20 00:35:04.582 IO depths : 1=0.2%, 2=0.5%, 4=6.1%, 8=80.1%, 16=13.1%, 32=0.0%, >=64=0.0% 00:35:04.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.582 complete : 0=0.0%, 4=88.4%, 8=7.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.582 issued rwts: total=578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.582 filename2: (groupid=0, jobs=1): err= 0: pid=799488: Mon Oct 14 17:00:07 2024 00:35:04.582 read: IOPS=63, BW=253KiB/s (259kB/s)(2560KiB/10118msec) 00:35:04.582 slat (nsec): min=6598, max=39894, avg=10687.00, stdev=4876.58 00:35:04.582 clat (msec): min=168, max=301, avg=251.96, stdev=20.47 00:35:04.582 lat (msec): min=168, max=301, avg=251.97, stdev=20.46 00:35:04.582 clat percentiles (msec): 00:35:04.582 | 1.00th=[ 203], 5.00th=[ 215], 10.00th=[ 222], 20.00th=[ 226], 00:35:04.582 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 264], 00:35:04.582 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:04.582 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 300], 99.95th=[ 300], 00:35:04.582 | 99.99th=[ 300] 00:35:04.582 bw ( 
KiB/s): min= 144, max= 368, per=4.39%, avg=249.60, stdev=44.47, samples=20 00:35:04.582 iops : min= 36, max= 92, avg=62.40, stdev=11.12, samples=20 00:35:04.582 lat (msec) : 250=40.00%, 500=60.00% 00:35:04.582 cpu : usr=98.76%, sys=0.86%, ctx=36, majf=0, minf=22 00:35:04.582 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:04.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.582 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.582 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.582 filename2: (groupid=0, jobs=1): err= 0: pid=799489: Mon Oct 14 17:00:07 2024 00:35:04.582 read: IOPS=41, BW=166KiB/s (169kB/s)(1664KiB/10054msec) 00:35:04.582 slat (nsec): min=7281, max=31889, avg=9145.53, stdev=2792.23 00:35:04.582 clat (msec): min=219, max=549, avg=386.59, stdev=65.51 00:35:04.582 lat (msec): min=219, max=549, avg=386.60, stdev=65.51 00:35:04.582 clat percentiles (msec): 00:35:04.582 | 1.00th=[ 241], 5.00th=[ 268], 10.00th=[ 305], 20.00th=[ 351], 00:35:04.582 | 30.00th=[ 363], 40.00th=[ 376], 50.00th=[ 376], 60.00th=[ 380], 00:35:04.582 | 70.00th=[ 414], 80.00th=[ 430], 90.00th=[ 435], 95.00th=[ 542], 00:35:04.582 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:35:04.582 | 99.99th=[ 550] 00:35:04.582 bw ( KiB/s): min= 128, max= 256, per=2.96%, avg=168.42, stdev=59.48, samples=19 00:35:04.582 iops : min= 32, max= 64, avg=42.11, stdev=14.87, samples=19 00:35:04.582 lat (msec) : 250=2.88%, 500=88.94%, 750=8.17% 00:35:04.582 cpu : usr=98.86%, sys=0.76%, ctx=12, majf=0, minf=24 00:35:04.582 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:04.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.582 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.582 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:04.582 filename2: (groupid=0, jobs=1): err= 0: pid=799490: Mon Oct 14 17:00:07 2024 00:35:04.582 read: IOPS=65, BW=262KiB/s (268kB/s)(2648KiB/10109msec) 00:35:04.582 slat (nsec): min=6911, max=55295, avg=18415.07, stdev=6138.60 00:35:04.582 clat (msec): min=29, max=351, avg=243.32, stdev=56.08 00:35:04.582 lat (msec): min=29, max=351, avg=243.33, stdev=56.08 00:35:04.582 clat percentiles (msec): 00:35:04.582 | 1.00th=[ 31], 5.00th=[ 124], 10.00th=[ 220], 20.00th=[ 226], 00:35:04.582 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 264], 60.00th=[ 266], 00:35:04.582 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 275], 00:35:04.582 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:35:04.582 | 99.99th=[ 351] 00:35:04.582 bw ( KiB/s): min= 176, max= 512, per=4.55%, avg=258.40, stdev=67.14, samples=20 00:35:04.582 iops : min= 44, max= 128, avg=64.60, stdev=16.78, samples=20 00:35:04.582 lat (msec) : 50=4.83%, 250=32.63%, 500=62.54% 00:35:04.582 cpu : usr=98.47%, sys=1.13%, ctx=14, majf=0, minf=31 00:35:04.582 IO depths : 1=0.6%, 2=2.1%, 4=10.7%, 8=74.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:35:04.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.582 complete : 0=0.0%, 4=90.0%, 8=4.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:04.582 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:04.582 latency : target=0, window=0, percentile=100.00%, depth=16 
00:35:04.582 00:35:04.582 Run status group 0 (all jobs): 00:35:04.582 READ: bw=5670KiB/s (5806kB/s), 166KiB/s-264KiB/s (169kB/s-271kB/s), io=56.1MiB (58.8MB), run=10054-10128msec 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.582 bdev_null0 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.582 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.583 [2024-10-14 17:00:08.103426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.583 bdev_null1 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:04.583 { 00:35:04.583 "params": { 00:35:04.583 "name": "Nvme$subsystem", 00:35:04.583 "trtype": "$TEST_TRANSPORT", 00:35:04.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.583 "adrfam": "ipv4", 00:35:04.583 "trsvcid": "$NVMF_PORT", 00:35:04.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.583 "hdgst": ${hdgst:-false}, 00:35:04.583 "ddgst": ${ddgst:-false} 00:35:04.583 }, 00:35:04.583 "method": "bdev_nvme_attach_controller" 00:35:04.583 } 00:35:04.583 EOF 00:35:04.583 )") 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:04.583 { 00:35:04.583 "params": { 00:35:04.583 "name": "Nvme$subsystem", 00:35:04.583 "trtype": "$TEST_TRANSPORT", 00:35:04.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:04.583 "adrfam": "ipv4", 00:35:04.583 "trsvcid": "$NVMF_PORT", 00:35:04.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:04.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:04.583 "hdgst": ${hdgst:-false}, 00:35:04.583 "ddgst": ${ddgst:-false} 00:35:04.583 }, 00:35:04.583 "method": "bdev_nvme_attach_controller" 00:35:04.583 } 00:35:04.583 EOF 00:35:04.583 )") 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:04.583 "params": { 00:35:04.583 "name": "Nvme0", 00:35:04.583 "trtype": "tcp", 00:35:04.583 "traddr": "10.0.0.2", 00:35:04.583 "adrfam": "ipv4", 00:35:04.583 "trsvcid": "4420", 00:35:04.583 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:04.583 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:04.583 "hdgst": false, 00:35:04.583 "ddgst": false 00:35:04.583 }, 00:35:04.583 "method": "bdev_nvme_attach_controller" 00:35:04.583 },{ 00:35:04.583 "params": { 00:35:04.583 "name": "Nvme1", 00:35:04.583 "trtype": "tcp", 00:35:04.583 "traddr": "10.0.0.2", 00:35:04.583 "adrfam": "ipv4", 00:35:04.583 "trsvcid": "4420", 00:35:04.583 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:04.583 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:04.583 "hdgst": false, 00:35:04.583 "ddgst": false 00:35:04.583 }, 00:35:04.583 "method": "bdev_nvme_attach_controller" 00:35:04.583 }' 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:04.583 17:00:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:04.583 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:04.583 ... 00:35:04.583 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:04.583 ... 
00:35:04.583 fio-3.35 00:35:04.583 Starting 4 threads 00:35:09.981 00:35:09.981 filename0: (groupid=0, jobs=1): err= 0: pid=801822: Mon Oct 14 17:00:14 2024 00:35:09.981 read: IOPS=2925, BW=22.9MiB/s (24.0MB/s)(114MiB/5002msec) 00:35:09.981 slat (nsec): min=5934, max=48880, avg=8819.28, stdev=3133.62 00:35:09.981 clat (usec): min=643, max=5017, avg=2705.92, stdev=385.98 00:35:09.981 lat (usec): min=654, max=5030, avg=2714.74, stdev=385.82 00:35:09.981 clat percentiles (usec): 00:35:09.981 | 1.00th=[ 1631], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2409], 00:35:09.981 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2737], 60.00th=[ 2900], 00:35:09.981 | 70.00th=[ 2933], 80.00th=[ 2933], 90.00th=[ 3064], 95.00th=[ 3228], 00:35:09.981 | 99.00th=[ 3851], 99.50th=[ 4015], 99.90th=[ 4621], 99.95th=[ 4686], 00:35:09.981 | 99.99th=[ 4883] 00:35:09.981 bw ( KiB/s): min=22048, max=24752, per=27.30%, avg=23362.00, stdev=947.72, samples=9 00:35:09.981 iops : min= 2756, max= 3094, avg=2920.22, stdev=118.49, samples=9 00:35:09.981 lat (usec) : 750=0.01%, 1000=0.10% 00:35:09.981 lat (msec) : 2=3.02%, 4=96.36%, 10=0.51% 00:35:09.981 cpu : usr=95.38%, sys=4.28%, ctx=9, majf=0, minf=9 00:35:09.981 IO depths : 1=0.6%, 2=9.8%, 4=62.5%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.981 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.981 issued rwts: total=14633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.981 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:09.981 filename0: (groupid=0, jobs=1): err= 0: pid=801823: Mon Oct 14 17:00:14 2024 00:35:09.981 read: IOPS=2588, BW=20.2MiB/s (21.2MB/s)(102MiB/5041msec) 00:35:09.981 slat (nsec): min=5969, max=39068, avg=8520.00, stdev=3031.38 00:35:09.981 clat (usec): min=648, max=41055, avg=3050.50, stdev=723.13 00:35:09.981 lat (usec): min=658, max=41068, avg=3059.02, stdev=723.01 00:35:09.981 clat percentiles (usec): 00:35:09.981 | 1.00th=[ 2114], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2835], 00:35:09.981 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:35:09.981 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3621], 95.00th=[ 3884], 00:35:09.981 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5407], 00:35:09.981 | 99.99th=[41157] 00:35:09.981 bw ( KiB/s): min=20224, max=21680, per=24.39%, avg=20872.00, stdev=442.84, samples=10 00:35:09.981 iops : min= 2528, max= 2710, avg=2609.00, stdev=55.36, samples=10 00:35:09.981 lat (usec) : 750=0.01%, 1000=0.02% 00:35:09.981 lat (msec) : 2=0.66%, 4=95.11%, 10=4.18%, 50=0.02% 00:35:09.981 cpu : usr=96.11%, sys=3.57%, ctx=6, majf=0, minf=9 00:35:09.981 IO depths : 1=0.1%, 2=2.7%, 4=69.7%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.981 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.981 issued rwts: total=13048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.981 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:09.981 filename1: (groupid=0, jobs=1): err= 0: pid=801824: Mon Oct 14 17:00:14 2024 00:35:09.981 read: IOPS=2686, BW=21.0MiB/s (22.0MB/s)(105MiB/5001msec) 00:35:09.981 slat (nsec): min=5973, max=42831, avg=8789.84, stdev=3190.09 00:35:09.981 clat (usec): min=706, max=5455, avg=2951.93, stdev=427.44 00:35:09.981 lat (usec): min=718, max=5468, avg=2960.72, stdev=427.17 00:35:09.981 clat percentiles (usec): 00:35:09.981 | 
1.00th=[ 1893], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:35:09.981 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933], 00:35:09.981 | 70.00th=[ 2999], 80.00th=[ 3163], 90.00th=[ 3490], 95.00th=[ 3720], 00:35:09.981 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 5145], 99.95th=[ 5407], 00:35:09.981 | 99.99th=[ 5473] 00:35:09.981 bw ( KiB/s): min=20528, max=22192, per=25.10%, avg=21479.11, stdev=559.38, samples=9 00:35:09.981 iops : min= 2566, max= 2774, avg=2684.89, stdev=69.92, samples=9 00:35:09.981 lat (usec) : 750=0.01%, 1000=0.02% 00:35:09.981 lat (msec) : 2=1.44%, 4=95.77%, 10=2.76% 00:35:09.981 cpu : usr=95.78%, sys=3.92%, ctx=8, majf=0, minf=9 00:35:09.981 IO depths : 1=0.3%, 2=3.9%, 4=68.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.981 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.981 issued rwts: total=13434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.981 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:09.981 filename1: (groupid=0, jobs=1): err= 0: pid=801825: Mon Oct 14 17:00:14 2024 00:35:09.981 read: IOPS=2561, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:35:09.981 slat (nsec): min=5941, max=35097, avg=8519.27, stdev=3051.96 00:35:09.981 clat (usec): min=652, max=5552, avg=3098.48, stdev=484.98 00:35:09.981 lat (usec): min=661, max=5558, avg=3107.00, stdev=484.77 00:35:09.981 clat percentiles (usec): 00:35:09.981 | 1.00th=[ 2180], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2868], 00:35:09.981 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:35:09.981 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3654], 95.00th=[ 4113], 00:35:09.981 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5407], 00:35:09.981 | 99.99th=[ 5538] 00:35:09.981 bw ( KiB/s): min=19527, max=21664, per=24.02%, avg=20551.89, stdev=713.30, samples=9 00:35:09.981 iops : min= 2440, max= 2708, avg=2568.89, stdev=89.32, samples=9 00:35:09.981 lat (usec) : 750=0.04%, 1000=0.02% 00:35:09.982 lat (msec) : 2=0.39%, 4=93.51%, 10=6.04% 00:35:09.982 cpu : usr=96.06%, sys=3.60%, ctx=7, majf=0, minf=9 00:35:09.982 IO depths : 1=0.1%, 2=2.3%, 4=70.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.982 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.982 issued rwts: total=12808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.982 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:09.982 00:35:09.982 Run status group 0 (all jobs): 00:35:09.982 READ: bw=83.6MiB/s (87.6MB/s), 20.0MiB/s-22.9MiB/s (21.0MB/s-24.0MB/s), io=421MiB (442MB), run=5001-5041msec 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.982 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.241 00:35:10.241 real 0m24.590s 00:35:10.241 user 4m53.783s 00:35:10.241 sys 0m4.820s 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:10.241 17:00:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.241 ************************************ 00:35:10.241 END TEST fio_dif_rand_params 00:35:10.241 ************************************ 00:35:10.241 17:00:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:10.241 17:00:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:10.241 17:00:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:10.241 17:00:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.241 ************************************ 00:35:10.241 START TEST fio_dif_digest 00:35:10.241 ************************************ 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.241 bdev_null0 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.241 [2024-10-14 17:00:14.747244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:10.241 17:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:10.241 { 00:35:10.241 "params": { 00:35:10.241 "name": 
"Nvme$subsystem", 00:35:10.241 "trtype": "$TEST_TRANSPORT", 00:35:10.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.241 "adrfam": "ipv4", 00:35:10.241 "trsvcid": "$NVMF_PORT", 00:35:10.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.241 "hdgst": ${hdgst:-false}, 00:35:10.241 "ddgst": ${ddgst:-false} 00:35:10.242 }, 00:35:10.242 "method": "bdev_nvme_attach_controller" 00:35:10.242 } 00:35:10.242 EOF 00:35:10.242 )") 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:10.242 "params": { 00:35:10.242 "name": "Nvme0", 00:35:10.242 "trtype": "tcp", 00:35:10.242 "traddr": "10.0.0.2", 00:35:10.242 "adrfam": "ipv4", 00:35:10.242 "trsvcid": "4420", 00:35:10.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.242 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.242 "hdgst": true, 00:35:10.242 "ddgst": true 00:35:10.242 }, 00:35:10.242 "method": "bdev_nvme_attach_controller" 00:35:10.242 }' 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:10.242 17:00:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.500 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:10.500 ... 
00:35:10.500 fio-3.35 00:35:10.500 Starting 3 threads 00:35:22.710 00:35:22.710 filename0: (groupid=0, jobs=1): err= 0: pid=803216: Mon Oct 14 17:00:25 2024 00:35:22.710 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(375MiB/10047msec) 00:35:22.710 slat (nsec): min=6253, max=36416, avg=11811.27, stdev=2052.30 00:35:22.710 clat (usec): min=7356, max=49127, avg=10025.43, stdev=1205.46 00:35:22.710 lat (usec): min=7369, max=49140, avg=10037.24, stdev=1205.47 00:35:22.710 clat percentiles (usec): 00:35:22.710 | 1.00th=[ 8291], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:35:22.710 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:35:22.710 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10814], 95.00th=[11076], 00:35:22.711 | 99.00th=[11600], 99.50th=[11863], 99.90th=[12256], 99.95th=[46924], 00:35:22.711 | 99.99th=[49021] 00:35:22.711 bw ( KiB/s): min=37120, max=39680, per=35.25%, avg=38339.80, stdev=625.39, samples=20 00:35:22.711 iops : min= 290, max= 310, avg=299.50, stdev= 4.89, samples=20 00:35:22.711 lat (msec) : 10=49.63%, 20=50.30%, 50=0.07% 00:35:22.711 cpu : usr=94.58%, sys=5.10%, ctx=87, majf=0, minf=56 00:35:22.711 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.711 issued rwts: total=2998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:22.711 filename0: (groupid=0, jobs=1): err= 0: pid=803217: Mon Oct 14 17:00:25 2024 00:35:22.711 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(352MiB/10044msec) 00:35:22.711 slat (nsec): min=6378, max=54608, avg=11677.71, stdev=2035.02 00:35:22.711 clat (usec): min=8179, max=50856, avg=10679.31, stdev=1269.35 00:35:22.711 lat (usec): min=8191, max=50868, avg=10690.99, stdev=1269.40 00:35:22.711 clat percentiles (usec): 00:35:22.711 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:35:22.711 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:35:22.711 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:35:22.711 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13566], 99.95th=[47973], 00:35:22.711 | 99.99th=[51119] 00:35:22.711 bw ( KiB/s): min=34816, max=37120, per=33.09%, avg=35993.60, stdev=583.77, samples=20 00:35:22.711 iops : min= 272, max= 290, avg=281.20, stdev= 4.56, samples=20 00:35:22.711 lat (msec) : 10=17.73%, 20=82.20%, 50=0.04%, 100=0.04% 00:35:22.711 cpu : usr=94.67%, sys=5.02%, ctx=20, majf=0, minf=79 00:35:22.711 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.711 issued rwts: total=2814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:22.711 filename0: (groupid=0, jobs=1): err= 0: pid=803218: Mon Oct 14 17:00:25 2024 00:35:22.711 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(341MiB/10043msec) 00:35:22.711 slat (nsec): min=6305, max=26266, avg=11624.07, stdev=1674.17 00:35:22.711 clat (usec): min=8398, max=49394, avg=11027.86, stdev=1254.80 00:35:22.711 lat (usec): min=8405, max=49406, avg=11039.48, stdev=1254.82 00:35:22.711 clat percentiles (usec): 00:35:22.711 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10421], 
00:35:22.711 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:35:22.711 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:35:22.711 | 99.00th=[12911], 99.50th=[13173], 99.90th=[14877], 99.95th=[47449], 00:35:22.711 | 99.99th=[49546] 00:35:22.711 bw ( KiB/s): min=34048, max=35584, per=32.05%, avg=34854.40, stdev=500.24, samples=20 00:35:22.711 iops : min= 266, max= 278, avg=272.30, stdev= 3.91, samples=20 00:35:22.711 lat (msec) : 10=7.67%, 20=92.26%, 50=0.07% 00:35:22.711 cpu : usr=94.19%, sys=5.51%, ctx=19, majf=0, minf=47 00:35:22.711 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.711 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.711 issued rwts: total=2725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.711 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:22.711 00:35:22.711 Run status group 0 (all jobs): 00:35:22.711 READ: bw=106MiB/s (111MB/s), 33.9MiB/s-37.3MiB/s (35.6MB/s-39.1MB/s), io=1067MiB (1119MB), run=10043-10047msec 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.711 00:35:22.711 real 0m11.157s 00:35:22.711 user 0m35.393s 00:35:22.711 sys 0m1.903s 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:22.711 17:00:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:22.711 ************************************ 00:35:22.711 END TEST fio_dif_digest 00:35:22.711 ************************************ 00:35:22.711 17:00:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:22.711 17:00:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:22.711 17:00:25 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:22.711 17:00:25 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:22.711 17:00:25 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:22.711 17:00:25 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:22.711 17:00:25 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:22.711 17:00:25 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:22.711 rmmod nvme_tcp 00:35:22.711 rmmod nvme_fabrics 00:35:22.711 rmmod nvme_keyring 00:35:22.711 17:00:25 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:22.711 17:00:25 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:22.711 17:00:25 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:22.711 17:00:25 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 794110 ']' 00:35:22.711 17:00:25 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 794110 00:35:22.711 17:00:25 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 794110 ']' 00:35:22.711 17:00:25 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 794110 00:35:22.711 17:00:25 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:35:22.711 17:00:25 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:22.711 17:00:25 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 794110 00:35:22.711 17:00:26 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:22.711 17:00:26 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:22.711 17:00:26 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 794110' 00:35:22.711 killing process with pid 794110 00:35:22.711 17:00:26 nvmf_dif -- common/autotest_common.sh@969 -- # kill 794110 00:35:22.711 17:00:26 nvmf_dif -- common/autotest_common.sh@974 -- # wait 794110 00:35:22.711 17:00:26 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:35:22.711 17:00:26 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:24.617 Waiting for block devices as requested 00:35:24.617 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:24.617 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:24.617 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:24.617 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:24.617 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:24.876 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:24.876 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:24.876 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:25.136 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:25.136 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:25.136 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:25.395 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:25.395 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:25.395 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:25.395 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:25.655 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:25.655 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:25.655 17:00:30 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:25.655 17:00:30 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:25.655 17:00:30 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:25.655 17:00:30 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:35:25.655 17:00:30 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:25.655 17:00:30 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:35:25.655 17:00:30 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:25.655 17:00:30 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:25.655 17:00:30 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.655 17:00:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:25.655 17:00:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.191 17:00:32 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:28.191 
00:35:28.191 real 1m14.352s 00:35:28.191 user 7m12.165s 00:35:28.191 sys 0m20.206s 00:35:28.191 17:00:32 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:28.191 17:00:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:28.191 ************************************ 00:35:28.191 END TEST nvmf_dif 00:35:28.191 ************************************ 00:35:28.191 17:00:32 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:28.191 17:00:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:28.191 17:00:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:28.191 17:00:32 -- common/autotest_common.sh@10 -- # set +x 00:35:28.191 ************************************ 00:35:28.191 START TEST nvmf_abort_qd_sizes 00:35:28.191 ************************************ 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:28.191 * Looking for test storage... 00:35:28.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:28.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.191 --rc genhtml_branch_coverage=1 00:35:28.191 --rc genhtml_function_coverage=1 00:35:28.191 --rc genhtml_legend=1 00:35:28.191 --rc geninfo_all_blocks=1 00:35:28.191 --rc geninfo_unexecuted_blocks=1 00:35:28.191 00:35:28.191 ' 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:28.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.191 --rc genhtml_branch_coverage=1 00:35:28.191 --rc genhtml_function_coverage=1 00:35:28.191 --rc genhtml_legend=1 00:35:28.191 --rc geninfo_all_blocks=1 00:35:28.191 --rc geninfo_unexecuted_blocks=1 00:35:28.191 00:35:28.191 ' 00:35:28.191 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:28.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.191 --rc genhtml_branch_coverage=1 00:35:28.191 --rc genhtml_function_coverage=1 00:35:28.191 --rc genhtml_legend=1 00:35:28.192 --rc geninfo_all_blocks=1 00:35:28.192 --rc geninfo_unexecuted_blocks=1 00:35:28.192 00:35:28.192 ' 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:28.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.192 --rc genhtml_branch_coverage=1 00:35:28.192 --rc genhtml_function_coverage=1 00:35:28.192 --rc genhtml_legend=1 00:35:28.192 --rc geninfo_all_blocks=1 00:35:28.192 --rc geninfo_unexecuted_blocks=1 00:35:28.192 00:35:28.192 ' 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:28.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:28.192 17:00:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:33.508 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:33.509 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:33.509 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:33.509 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:33.510 Found net devices under 0000:86:00.0: cvl_0_0 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:33.510 Found net devices under 0000:86:00.1: cvl_0_1 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:33.510 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:33.511 17:00:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:33.511 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:33.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:33.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:35:33.778 00:35:33.778 --- 10.0.0.2 ping statistics --- 00:35:33.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.778 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:33.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:33.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:35:33.778 00:35:33.778 --- 10.0.0.1 ping statistics --- 00:35:33.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.778 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:33.778 17:00:38 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:37.069 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:37.069 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:38.006 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=811028 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 811028 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 811028 ']' 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:38.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:38.265 17:00:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:38.265 [2024-10-14 17:00:42.784211] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:35:38.265 [2024-10-14 17:00:42.784253] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:38.265 [2024-10-14 17:00:42.856800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:38.265 [2024-10-14 17:00:42.900045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:38.265 [2024-10-14 17:00:42.900082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:38.265 [2024-10-14 17:00:42.900089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:38.265 [2024-10-14 17:00:42.900095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:38.265 [2024-10-14 17:00:42.900102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:38.525 [2024-10-14 17:00:42.901704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.525 [2024-10-14 17:00:42.901813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:38.525 [2024-10-14 17:00:42.901921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.525 [2024-10-14 17:00:42.901922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:38.525 
17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:38.525 17:00:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:38.525 ************************************ 00:35:38.525 START TEST spdk_target_abort 00:35:38.525 ************************************ 00:35:38.525 17:00:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:35:38.525 17:00:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:38.525 17:00:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:38.525 17:00:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:38.525 17:00:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.813 spdk_targetn1 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.813 [2024-10-14 17:00:45.919554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:41.813 [2024-10-14 17:00:45.953409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.813 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:41.814 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.814 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:41.814 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.814 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:41.814 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:41.814 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:41.814 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:41.814 17:00:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:45.100 Initializing NVMe Controllers 00:35:45.100 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:45.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:45.100 Initialization complete. Launching workers. 00:35:45.100 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16965, failed: 0 00:35:45.100 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1393, failed to submit 15572 00:35:45.100 success 770, unsuccessful 623, failed 0 00:35:45.100 17:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:45.100 17:00:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:48.389 Initializing NVMe Controllers 00:35:48.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:48.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:48.389 Initialization complete. Launching workers. 00:35:48.389 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8454, failed: 0 00:35:48.389 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1255, failed to submit 7199 00:35:48.389 success 320, unsuccessful 935, failed 0 00:35:48.389 17:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:48.389 17:00:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:51.676 Initializing NVMe Controllers 00:35:51.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:51.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:51.676 Initialization complete. Launching workers. 
00:35:51.676 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38341, failed: 0 00:35:51.676 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2797, failed to submit 35544 00:35:51.676 success 613, unsuccessful 2184, failed 0 00:35:51.676 17:00:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:51.676 17:00:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.676 17:00:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.676 17:00:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:51.676 17:00:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:51.676 17:00:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:51.676 17:00:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 811028 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 811028 ']' 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 811028 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 811028 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 811028' 00:35:53.055 killing process with pid 811028 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 811028 00:35:53.055 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 811028 00:35:53.315 00:35:53.315 real 0m14.709s 00:35:53.315 user 0m56.135s 00:35:53.315 sys 0m2.645s 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:53.315 ************************************ 00:35:53.315 END TEST spdk_target_abort 00:35:53.315 ************************************ 00:35:53.315 17:00:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:53.315 17:00:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:53.315 17:00:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:53.315 17:00:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.315 ************************************ 00:35:53.315 START TEST kernel_target_abort 00:35:53.315 
************************************ 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:53.315 17:00:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:56.603 Waiting for block devices as requested 00:35:56.603 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:56.603 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:56.603 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:56.603 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:56.603 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:56.603 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:56.603 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:56.603 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:56.862 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:56.862 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:56.862 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:57.120 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:57.120 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:57.120 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:57.120 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:57.379 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:57.379 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:57.379 17:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:35:57.379 17:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:57.379 17:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:35:57.379 17:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:57.379 17:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:57.379 17:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:57.379 17:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:35:57.379 17:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:57.379 17:01:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:57.636 No valid GPT data, bailing 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:57.636 17:01:02 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:57.636 00:35:57.636 Discovery Log Number of Records 2, Generation counter 2 00:35:57.636 =====Discovery Log Entry 0====== 00:35:57.636 trtype: tcp 00:35:57.636 adrfam: ipv4 00:35:57.636 subtype: current discovery subsystem 00:35:57.636 treq: not specified, sq flow control disable supported 00:35:57.636 portid: 1 00:35:57.636 trsvcid: 4420 00:35:57.636 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:57.636 traddr: 10.0.0.1 00:35:57.636 eflags: none 00:35:57.636 sectype: none 00:35:57.636 =====Discovery Log Entry 1====== 00:35:57.636 trtype: tcp 00:35:57.636 adrfam: ipv4 00:35:57.636 subtype: nvme subsystem 00:35:57.636 treq: not specified, sq flow control disable supported 00:35:57.636 portid: 1 00:35:57.636 trsvcid: 4420 00:35:57.636 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:57.636 traddr: 10.0.0.1 00:35:57.636 eflags: none 00:35:57.636 sectype: none 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.636 17:01:02 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:57.636 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:57.637 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:57.637 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:57.637 17:01:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.924 Initializing NVMe Controllers 00:36:00.924 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:00.924 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:00.924 Initialization complete. Launching workers. 00:36:00.924 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94974, failed: 0 00:36:00.924 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94974, failed to submit 0 00:36:00.924 success 0, unsuccessful 94974, failed 0 00:36:00.924 17:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:00.924 17:01:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:04.212 Initializing NVMe Controllers 00:36:04.212 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:04.212 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:04.212 Initialization complete. Launching workers. 
00:36:04.212 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150434, failed: 0 00:36:04.212 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37854, failed to submit 112580 00:36:04.212 success 0, unsuccessful 37854, failed 0 00:36:04.212 17:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:04.212 17:01:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:07.499 Initializing NVMe Controllers 00:36:07.499 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:07.499 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:07.499 Initialization complete. Launching workers. 00:36:07.499 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139893, failed: 0 00:36:07.499 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35022, failed to submit 104871 00:36:07.499 success 0, unsuccessful 35022, failed 0 00:36:07.499 17:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:07.499 17:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:07.499 17:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:36:07.499 17:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:07.499 17:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:07.499 17:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:07.500 17:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:07.500 17:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:36:07.500 17:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:36:07.500 17:01:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:10.036 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:10.036 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:36:10.036 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:11.415 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:11.415 00:36:11.415 real 0m18.000s 00:36:11.415 user 0m9.136s 00:36:11.415 sys 0m5.062s 00:36:11.415 17:01:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:11.415 17:01:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.415 ************************************ 00:36:11.415 END TEST kernel_target_abort 00:36:11.415 ************************************ 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:11.415 rmmod nvme_tcp 00:36:11.415 rmmod nvme_fabrics 00:36:11.415 rmmod nvme_keyring 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 811028 ']' 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 811028 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 811028 ']' 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 811028 00:36:11.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (811028) - No such process 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 811028 is not found' 00:36:11.415 Process with pid 811028 is not found 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:36:11.415 17:01:15 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:14.706 Waiting for block devices as requested 00:36:14.706 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:14.706 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:14.706 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:14.706 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:14.706 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:14.706 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:14.706 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:14.706 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:14.965 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:14.965 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:14.965 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:15.224 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:15.224 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:15.224 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:15.224 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:15.483 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:15.483 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:15.483 17:01:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.537 17:01:22 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:17.538 00:36:17.538 real 0m49.720s 00:36:17.538 user 1m9.637s 00:36:17.538 sys 0m16.291s 00:36:17.538 17:01:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:17.538 17:01:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.538 ************************************ 00:36:17.538 END TEST nvmf_abort_qd_sizes 00:36:17.538 ************************************ 00:36:17.538 17:01:22 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:17.538 17:01:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:17.538 17:01:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:17.538 17:01:22 -- common/autotest_common.sh@10 -- # set +x 00:36:17.797 ************************************ 00:36:17.797 START TEST keyring_file 00:36:17.797 ************************************ 00:36:17.797 17:01:22 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:17.797 * Looking for test storage... 
00:36:17.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:17.797 17:01:22 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:17.797 17:01:22 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:36:17.797 17:01:22 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:17.797 17:01:22 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:17.797 17:01:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:17.797 17:01:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:17.797 17:01:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:17.797 17:01:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:17.798 17:01:22 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:17.798 17:01:22 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:17.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.798 --rc genhtml_branch_coverage=1 00:36:17.798 --rc genhtml_function_coverage=1 00:36:17.798 --rc genhtml_legend=1 00:36:17.798 --rc geninfo_all_blocks=1 00:36:17.798 --rc geninfo_unexecuted_blocks=1 00:36:17.798 00:36:17.798 ' 00:36:17.798 17:01:22 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:17.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.798 --rc genhtml_branch_coverage=1 00:36:17.798 --rc genhtml_function_coverage=1 00:36:17.798 --rc genhtml_legend=1 00:36:17.798 --rc geninfo_all_blocks=1 
00:36:17.798 --rc geninfo_unexecuted_blocks=1 00:36:17.798 00:36:17.798 ' 00:36:17.798 17:01:22 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:17.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.798 --rc genhtml_branch_coverage=1 00:36:17.798 --rc genhtml_function_coverage=1 00:36:17.798 --rc genhtml_legend=1 00:36:17.798 --rc geninfo_all_blocks=1 00:36:17.798 --rc geninfo_unexecuted_blocks=1 00:36:17.798 00:36:17.798 ' 00:36:17.798 17:01:22 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:17.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.798 --rc genhtml_branch_coverage=1 00:36:17.798 --rc genhtml_function_coverage=1 00:36:17.798 --rc genhtml_legend=1 00:36:17.798 --rc geninfo_all_blocks=1 00:36:17.798 --rc geninfo_unexecuted_blocks=1 00:36:17.798 00:36:17.798 ' 00:36:17.798 17:01:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:17.798 17:01:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:17.798 17:01:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:17.798 17:01:22 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.798 17:01:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.798 17:01:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.798 17:01:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:17.798 17:01:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:17.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:17.798 17:01:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:17.798 17:01:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:17.798 17:01:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:17.798 17:01:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:17.798 17:01:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:17.798 17:01:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:17.798 17:01:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:17.798 17:01:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:36:17.798 17:01:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:17.798 17:01:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:17.798 17:01:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:17.798 17:01:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:17.798 17:01:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2kODbLUQiK 00:36:17.798 17:01:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:36:17.798 17:01:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2kODbLUQiK 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2kODbLUQiK 00:36:18.057 17:01:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.2kODbLUQiK 00:36:18.057 17:01:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ijfQqJVaJ6 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:18.057 17:01:22 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:18.057 17:01:22 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:36:18.057 17:01:22 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:18.057 17:01:22 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:36:18.057 17:01:22 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:36:18.057 17:01:22 keyring_file -- nvmf/common.sh@731 -- # python - 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ijfQqJVaJ6 00:36:18.057 17:01:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ijfQqJVaJ6 00:36:18.057 17:01:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ijfQqJVaJ6 00:36:18.057 17:01:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=819817 00:36:18.057 17:01:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 819817 00:36:18.057 17:01:22 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:18.057 17:01:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 819817 ']' 00:36:18.057 17:01:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.057 17:01:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:18.057 17:01:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.057 17:01:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:18.057 17:01:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.057 [2024-10-14 17:01:22.566120] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:36:18.057 [2024-10-14 17:01:22.566169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid819817 ] 00:36:18.057 [2024-10-14 17:01:22.635138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.057 [2024-10-14 17:01:22.676952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.316 17:01:22 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:18.316 17:01:22 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:18.316 17:01:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:18.316 17:01:22 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.316 17:01:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.316 [2024-10-14 17:01:22.894305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.317 null0 00:36:18.317 [2024-10-14 17:01:22.926360] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:18.317 [2024-10-14 17:01:22.926696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:18.317 17:01:22 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.317 17:01:22 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:18.317 17:01:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:18.317 17:01:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:18.317 17:01:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:18.317 17:01:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:18.317 17:01:22 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:18.317 17:01:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:18.317 17:01:22 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:18.317 17:01:22 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:18.317 17:01:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.576 [2024-10-14 17:01:22.954425] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:18.576 request: 00:36:18.576 { 00:36:18.576 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:18.576 "secure_channel": false, 00:36:18.576 "listen_address": { 00:36:18.576 "trtype": "tcp", 00:36:18.576 "traddr": "127.0.0.1", 00:36:18.576 "trsvcid": "4420" 00:36:18.576 }, 00:36:18.576 "method": "nvmf_subsystem_add_listener", 00:36:18.576 "req_id": 1 00:36:18.576 } 00:36:18.576 Got JSON-RPC error response 00:36:18.576 response: 00:36:18.576 { 00:36:18.576 "code": 
-32602, 00:36:18.576 "message": "Invalid parameters" 00:36:18.576 } 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:18.576 17:01:22 keyring_file -- keyring/file.sh@47 -- # bperfpid=819825 00:36:18.576 17:01:22 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:18.576 17:01:22 keyring_file -- keyring/file.sh@49 -- # waitforlisten 819825 /var/tmp/bperf.sock 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 819825 ']' 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:18.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:18.576 17:01:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:18.576 [2024-10-14 17:01:23.000466] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:36:18.576 [2024-10-14 17:01:23.000506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid819825 ] 00:36:18.576 [2024-10-14 17:01:23.068901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.576 [2024-10-14 17:01:23.110220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:18.576 17:01:23 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:18.576 17:01:23 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:18.576 17:01:23 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2kODbLUQiK 00:36:18.576 17:01:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2kODbLUQiK 00:36:18.835 17:01:23 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ijfQqJVaJ6 00:36:18.835 17:01:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ijfQqJVaJ6 00:36:19.093 17:01:23 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:19.093 17:01:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:19.093 17:01:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.093 17:01:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.093 17:01:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.352 
17:01:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.2kODbLUQiK == \/\t\m\p\/\t\m\p\.\2\k\O\D\b\L\U\Q\i\K ]] 00:36:19.352 17:01:23 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:19.352 17:01:23 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:19.352 17:01:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.352 17:01:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:19.352 17:01:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.611 17:01:23 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ijfQqJVaJ6 == \/\t\m\p\/\t\m\p\.\i\j\f\Q\q\J\V\a\J\6 ]] 00:36:19.611 17:01:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:19.611 17:01:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:19.611 17:01:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.611 17:01:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.611 17:01:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.611 17:01:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:19.611 17:01:24 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:19.611 17:01:24 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:19.611 17:01:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:19.611 17:01:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:19.611 17:01:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:19.611 17:01:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.611 17:01:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:19.870 17:01:24 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:19.870 17:01:24 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.870 17:01:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.127 [2024-10-14 17:01:24.552005] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:20.127 nvme0n1 00:36:20.127 17:01:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:20.127 17:01:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:20.127 17:01:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.127 17:01:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.127 17:01:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.127 17:01:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:20.386 17:01:24 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:20.386 17:01:24 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:20.386 17:01:24 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:36:20.386 17:01:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:20.386 17:01:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:20.386 17:01:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:20.386 17:01:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.644 17:01:25 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:20.644 17:01:25 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:20.644 Running I/O for 1 seconds... 00:36:21.581 19447.00 IOPS, 75.96 MiB/s 00:36:21.581 Latency(us) 00:36:21.581 [2024-10-14T15:01:26.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.581 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:21.581 nvme0n1 : 1.00 19494.57 76.15 0.00 0.00 6554.39 3651.29 10985.08 00:36:21.581 [2024-10-14T15:01:26.215Z] =================================================================================================================== 00:36:21.581 [2024-10-14T15:01:26.215Z] Total : 19494.57 76.15 0.00 0.00 6554.39 3651.29 10985.08 00:36:21.581 { 00:36:21.581 "results": [ 00:36:21.581 { 00:36:21.581 "job": "nvme0n1", 00:36:21.581 "core_mask": "0x2", 00:36:21.581 "workload": "randrw", 00:36:21.581 "percentage": 50, 00:36:21.581 "status": "finished", 00:36:21.581 "queue_depth": 128, 00:36:21.581 "io_size": 4096, 00:36:21.581 "runtime": 1.004126, 00:36:21.581 "iops": 19494.565423064436, 00:36:21.581 "mibps": 76.15064618384545, 00:36:21.581 "io_failed": 0, 00:36:21.581 "io_timeout": 0, 00:36:21.581 "avg_latency_us": 6554.387563923858, 00:36:21.581 "min_latency_us": 3651.2914285714287, 00:36:21.581 "max_latency_us": 10985.081904761904 00:36:21.581 } 00:36:21.581 ], 00:36:21.581 "core_count": 1 00:36:21.581 } 00:36:21.581 17:01:26 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:21.581 17:01:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:21.840 17:01:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:21.840 17:01:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:21.840 17:01:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:21.840 17:01:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:21.840 17:01:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:21.840 17:01:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.099 17:01:26 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:22.099 17:01:26 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:22.099 17:01:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:22.099 17:01:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.099 17:01:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:22.099 17:01:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.099 17:01:26 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.358 17:01:26 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:22.358 17:01:26 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.358 17:01:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:22.358 [2024-10-14 17:01:26.918746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:22.358 [2024-10-14 17:01:26.919272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1494430 (107): Transport endpoint is not connected 00:36:22.358 [2024-10-14 17:01:26.920266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1494430 (9): Bad file descriptor 00:36:22.358 [2024-10-14 17:01:26.921267] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:22.358 [2024-10-14 17:01:26.921278] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:22.358 [2024-10-14 17:01:26.921286] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:22.358 [2024-10-14 17:01:26.921295] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
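The errors just above are the intended outcome of the negative case: the target side was set up with key0, so re-attaching with key1 makes the TLS handshake fail, the socket is torn down (errno 107, then the bad file descriptor), and bdev_nvme_attach_controller surfaces -5 (Input/output error), which the NOT wrapper counts as a pass; the JSON-RPC request and error response for that call are echoed next. As a rough, illustrative reproduction only (not part of the captured run, and with a placeholder key file path instead of the test's mktemp name), the same check can be driven by hand against the bperf RPC socket:

  # Illustrative sketch (not from the captured run); the key path is a placeholder.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/wrong-psk.key
  # Attaching with a PSK that does not match the target's configured key is
  # expected to fail with "Input/output error" (-5), as in the trace above.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk key1 && echo "unexpected success"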
00:36:22.358 request: 00:36:22.358 { 00:36:22.358 "name": "nvme0", 00:36:22.358 "trtype": "tcp", 00:36:22.358 "traddr": "127.0.0.1", 00:36:22.358 "adrfam": "ipv4", 00:36:22.358 "trsvcid": "4420", 00:36:22.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:22.358 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:22.358 "prchk_reftag": false, 00:36:22.358 "prchk_guard": false, 00:36:22.358 "hdgst": false, 00:36:22.358 "ddgst": false, 00:36:22.358 "psk": "key1", 00:36:22.358 "allow_unrecognized_csi": false, 00:36:22.358 "method": "bdev_nvme_attach_controller", 00:36:22.358 "req_id": 1 00:36:22.358 } 00:36:22.358 Got JSON-RPC error response 00:36:22.358 response: 00:36:22.358 { 00:36:22.358 "code": -5, 00:36:22.358 "message": "Input/output error" 00:36:22.358 } 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:22.358 17:01:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:22.358 17:01:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:22.358 17:01:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.358 17:01:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.358 17:01:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.358 17:01:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:22.359 17:01:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.618 17:01:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:22.618 17:01:27 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:22.618 17:01:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:22.618 17:01:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.618 17:01:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.618 17:01:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:22.618 17:01:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.876 17:01:27 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:22.876 17:01:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:22.876 17:01:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:23.135 17:01:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:23.135 17:01:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:23.135 17:01:27 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:23.135 17:01:27 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:23.135 17:01:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.394 17:01:27 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:23.394 17:01:27 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.2kODbLUQiK 00:36:23.394 17:01:27 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.2kODbLUQiK 00:36:23.394 17:01:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:23.394 17:01:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.2kODbLUQiK 00:36:23.394 17:01:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:23.394 17:01:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:23.394 17:01:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:23.394 17:01:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:23.394 17:01:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2kODbLUQiK 00:36:23.394 17:01:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2kODbLUQiK 00:36:23.653 [2024-10-14 17:01:28.093415] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2kODbLUQiK': 0100660 00:36:23.653 [2024-10-14 17:01:28.093441] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:23.653 request: 00:36:23.653 { 00:36:23.653 "name": "key0", 00:36:23.653 "path": "/tmp/tmp.2kODbLUQiK", 00:36:23.653 "method": "keyring_file_add_key", 00:36:23.653 "req_id": 1 00:36:23.653 } 00:36:23.653 Got JSON-RPC error response 00:36:23.653 response: 00:36:23.653 { 00:36:23.653 "code": -1, 00:36:23.653 "message": "Operation not permitted" 00:36:23.653 } 00:36:23.653 17:01:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:23.653 17:01:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:23.653 17:01:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:23.653 17:01:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:23.653 17:01:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.2kODbLUQiK 00:36:23.653 17:01:28 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2kODbLUQiK 00:36:23.653 17:01:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2kODbLUQiK 00:36:23.911 17:01:28 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.2kODbLUQiK 00:36:23.911 17:01:28 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:23.911 17:01:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:23.911 17:01:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.911 17:01:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.911 17:01:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.911 17:01:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:23.911 17:01:28 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:23.912 17:01:28 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:23.912 17:01:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:23.912 17:01:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:23.912 17:01:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:23.912 17:01:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:23.912 17:01:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:23.912 17:01:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:23.912 17:01:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:23.912 17:01:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.170 [2024-10-14 17:01:28.682970] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.2kODbLUQiK': No such file or directory 00:36:24.170 [2024-10-14 17:01:28.682992] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:24.170 [2024-10-14 17:01:28.683008] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:24.170 [2024-10-14 17:01:28.683015] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:24.170 [2024-10-14 17:01:28.683022] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:24.170 [2024-10-14 17:01:28.683028] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:24.170 request: 00:36:24.170 { 00:36:24.170 "name": "nvme0", 00:36:24.170 "trtype": "tcp", 00:36:24.170 "traddr": "127.0.0.1", 00:36:24.170 "adrfam": "ipv4", 00:36:24.170 "trsvcid": "4420", 00:36:24.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.170 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.170 "prchk_reftag": false, 00:36:24.170 "prchk_guard": false, 00:36:24.170 "hdgst": false, 00:36:24.170 "ddgst": false, 00:36:24.170 "psk": "key0", 00:36:24.170 "allow_unrecognized_csi": false, 00:36:24.170 "method": "bdev_nvme_attach_controller", 00:36:24.170 "req_id": 1 00:36:24.170 } 00:36:24.170 Got JSON-RPC error response 00:36:24.170 response: 00:36:24.170 { 00:36:24.170 "code": -19, 00:36:24.170 "message": "No such device" 00:36:24.170 } 00:36:24.170 17:01:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:24.170 17:01:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:24.170 17:01:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:24.170 17:01:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:24.170 17:01:28 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:24.170 17:01:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:24.429 17:01:28 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:24.429 17:01:28 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:36:24.429 17:01:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:24.429 17:01:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:24.429 17:01:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:24.429 17:01:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:24.429 17:01:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.JaIx4VaIJa 00:36:24.429 17:01:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:24.429 17:01:28 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:24.429 17:01:28 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:36:24.429 17:01:28 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:24.429 17:01:28 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:36:24.429 17:01:28 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:36:24.429 17:01:28 keyring_file -- nvmf/common.sh@731 -- # python - 00:36:24.429 17:01:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.JaIx4VaIJa 00:36:24.429 17:01:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.JaIx4VaIJa 00:36:24.429 17:01:28 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.JaIx4VaIJa 00:36:24.429 17:01:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JaIx4VaIJa 00:36:24.429 17:01:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JaIx4VaIJa 00:36:24.688 17:01:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.688 17:01:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:24.947 nvme0n1 00:36:24.947 17:01:29 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:24.947 17:01:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:24.947 17:01:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:24.947 17:01:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:24.947 17:01:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:24.947 17:01:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.947 17:01:29 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:24.947 17:01:29 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:24.947 17:01:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:25.205 17:01:29 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:25.205 17:01:29 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:25.205 17:01:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.205 17:01:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.205 17:01:29 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.465 17:01:29 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:25.465 17:01:29 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:25.465 17:01:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:25.465 17:01:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.465 17:01:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.465 17:01:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.465 17:01:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.723 17:01:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:25.723 17:01:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:25.723 17:01:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:25.723 17:01:30 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:25.723 17:01:30 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:25.723 17:01:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.982 17:01:30 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:25.982 17:01:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.JaIx4VaIJa 00:36:25.982 17:01:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.JaIx4VaIJa 00:36:26.241 17:01:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ijfQqJVaJ6 00:36:26.241 17:01:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ijfQqJVaJ6 00:36:26.500 17:01:30 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.500 17:01:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.759 nvme0n1 00:36:26.759 17:01:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:26.759 17:01:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:27.018 17:01:31 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:27.018 "subsystems": [ 00:36:27.018 { 00:36:27.018 "subsystem": "keyring", 00:36:27.018 "config": [ 00:36:27.018 { 00:36:27.018 "method": "keyring_file_add_key", 00:36:27.018 "params": { 00:36:27.018 "name": "key0", 00:36:27.018 "path": "/tmp/tmp.JaIx4VaIJa" 00:36:27.018 } 00:36:27.018 }, 00:36:27.018 { 00:36:27.018 "method": "keyring_file_add_key", 00:36:27.018 "params": { 00:36:27.018 "name": "key1", 00:36:27.018 "path": "/tmp/tmp.ijfQqJVaJ6" 00:36:27.018 } 00:36:27.018 } 00:36:27.018 ] 00:36:27.018 
}, 00:36:27.018 { 00:36:27.018 "subsystem": "iobuf", 00:36:27.018 "config": [ 00:36:27.018 { 00:36:27.018 "method": "iobuf_set_options", 00:36:27.018 "params": { 00:36:27.018 "small_pool_count": 8192, 00:36:27.018 "large_pool_count": 1024, 00:36:27.018 "small_bufsize": 8192, 00:36:27.018 "large_bufsize": 135168 00:36:27.018 } 00:36:27.018 } 00:36:27.018 ] 00:36:27.018 }, 00:36:27.018 { 00:36:27.018 "subsystem": "sock", 00:36:27.018 "config": [ 00:36:27.018 { 00:36:27.018 "method": "sock_set_default_impl", 00:36:27.018 "params": { 00:36:27.018 "impl_name": "posix" 00:36:27.018 } 00:36:27.018 }, 00:36:27.018 { 00:36:27.018 "method": "sock_impl_set_options", 00:36:27.018 "params": { 00:36:27.018 "impl_name": "ssl", 00:36:27.018 "recv_buf_size": 4096, 00:36:27.018 "send_buf_size": 4096, 00:36:27.018 "enable_recv_pipe": true, 00:36:27.018 "enable_quickack": false, 00:36:27.018 "enable_placement_id": 0, 00:36:27.018 "enable_zerocopy_send_server": true, 00:36:27.018 "enable_zerocopy_send_client": false, 00:36:27.018 "zerocopy_threshold": 0, 00:36:27.018 "tls_version": 0, 00:36:27.018 "enable_ktls": false 00:36:27.018 } 00:36:27.018 }, 00:36:27.018 { 00:36:27.018 "method": "sock_impl_set_options", 00:36:27.018 "params": { 00:36:27.018 "impl_name": "posix", 00:36:27.018 "recv_buf_size": 2097152, 00:36:27.018 "send_buf_size": 2097152, 00:36:27.018 "enable_recv_pipe": true, 00:36:27.018 "enable_quickack": false, 00:36:27.018 "enable_placement_id": 0, 00:36:27.018 "enable_zerocopy_send_server": true, 00:36:27.018 "enable_zerocopy_send_client": false, 00:36:27.018 "zerocopy_threshold": 0, 00:36:27.019 "tls_version": 0, 00:36:27.019 "enable_ktls": false 00:36:27.019 } 00:36:27.019 } 00:36:27.019 ] 00:36:27.019 }, 00:36:27.019 { 00:36:27.019 "subsystem": "vmd", 00:36:27.019 "config": [] 00:36:27.019 }, 00:36:27.019 { 00:36:27.019 "subsystem": "accel", 00:36:27.019 "config": [ 00:36:27.019 { 00:36:27.019 "method": "accel_set_options", 00:36:27.019 "params": { 00:36:27.019 "small_cache_size": 128, 00:36:27.019 "large_cache_size": 16, 00:36:27.019 "task_count": 2048, 00:36:27.019 "sequence_count": 2048, 00:36:27.019 "buf_count": 2048 00:36:27.019 } 00:36:27.019 } 00:36:27.019 ] 00:36:27.019 }, 00:36:27.019 { 00:36:27.019 "subsystem": "bdev", 00:36:27.019 "config": [ 00:36:27.019 { 00:36:27.019 "method": "bdev_set_options", 00:36:27.019 "params": { 00:36:27.019 "bdev_io_pool_size": 65535, 00:36:27.019 "bdev_io_cache_size": 256, 00:36:27.019 "bdev_auto_examine": true, 00:36:27.019 "iobuf_small_cache_size": 128, 00:36:27.019 "iobuf_large_cache_size": 16 00:36:27.019 } 00:36:27.019 }, 00:36:27.019 { 00:36:27.019 "method": "bdev_raid_set_options", 00:36:27.019 "params": { 00:36:27.019 "process_window_size_kb": 1024, 00:36:27.019 "process_max_bandwidth_mb_sec": 0 00:36:27.019 } 00:36:27.019 }, 00:36:27.019 { 00:36:27.019 "method": "bdev_iscsi_set_options", 00:36:27.019 "params": { 00:36:27.019 "timeout_sec": 30 00:36:27.019 } 00:36:27.019 }, 00:36:27.019 { 00:36:27.019 "method": "bdev_nvme_set_options", 00:36:27.019 "params": { 00:36:27.019 "action_on_timeout": "none", 00:36:27.019 "timeout_us": 0, 00:36:27.019 "timeout_admin_us": 0, 00:36:27.019 "keep_alive_timeout_ms": 10000, 00:36:27.019 "arbitration_burst": 0, 00:36:27.019 "low_priority_weight": 0, 00:36:27.019 "medium_priority_weight": 0, 00:36:27.019 "high_priority_weight": 0, 00:36:27.019 "nvme_adminq_poll_period_us": 10000, 00:36:27.019 "nvme_ioq_poll_period_us": 0, 00:36:27.019 "io_queue_requests": 512, 00:36:27.019 "delay_cmd_submit": true, 00:36:27.019 
"transport_retry_count": 4, 00:36:27.019 "bdev_retry_count": 3, 00:36:27.019 "transport_ack_timeout": 0, 00:36:27.019 "ctrlr_loss_timeout_sec": 0, 00:36:27.019 "reconnect_delay_sec": 0, 00:36:27.019 "fast_io_fail_timeout_sec": 0, 00:36:27.019 "disable_auto_failback": false, 00:36:27.019 "generate_uuids": false, 00:36:27.019 "transport_tos": 0, 00:36:27.019 "nvme_error_stat": false, 00:36:27.019 "rdma_srq_size": 0, 00:36:27.019 "io_path_stat": false, 00:36:27.019 "allow_accel_sequence": false, 00:36:27.019 "rdma_max_cq_size": 0, 00:36:27.019 "rdma_cm_event_timeout_ms": 0, 00:36:27.019 "dhchap_digests": [ 00:36:27.019 "sha256", 00:36:27.019 "sha384", 00:36:27.019 "sha512" 00:36:27.019 ], 00:36:27.019 "dhchap_dhgroups": [ 00:36:27.019 "null", 00:36:27.019 "ffdhe2048", 00:36:27.019 "ffdhe3072", 00:36:27.019 "ffdhe4096", 00:36:27.019 "ffdhe6144", 00:36:27.019 "ffdhe8192" 00:36:27.019 ] 00:36:27.019 } 00:36:27.019 }, 00:36:27.019 { 00:36:27.019 "method": "bdev_nvme_attach_controller", 00:36:27.019 "params": { 00:36:27.019 "name": "nvme0", 00:36:27.019 "trtype": "TCP", 00:36:27.019 "adrfam": "IPv4", 00:36:27.019 "traddr": "127.0.0.1", 00:36:27.019 "trsvcid": "4420", 00:36:27.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.019 "prchk_reftag": false, 00:36:27.019 "prchk_guard": false, 00:36:27.019 "ctrlr_loss_timeout_sec": 0, 00:36:27.019 "reconnect_delay_sec": 0, 00:36:27.019 "fast_io_fail_timeout_sec": 0, 00:36:27.019 "psk": "key0", 00:36:27.019 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:27.019 "hdgst": false, 00:36:27.019 "ddgst": false, 00:36:27.019 "multipath": "multipath" 00:36:27.019 } 00:36:27.019 }, 00:36:27.019 { 00:36:27.019 "method": "bdev_nvme_set_hotplug", 00:36:27.019 "params": { 00:36:27.019 "period_us": 100000, 00:36:27.019 "enable": false 00:36:27.019 } 00:36:27.019 }, 00:36:27.019 { 00:36:27.019 "method": "bdev_wait_for_examine" 00:36:27.019 } 00:36:27.019 ] 00:36:27.019 }, 00:36:27.019 { 00:36:27.019 "subsystem": "nbd", 00:36:27.019 "config": [] 00:36:27.019 } 00:36:27.019 ] 00:36:27.019 }' 00:36:27.019 17:01:31 keyring_file -- keyring/file.sh@115 -- # killprocess 819825 00:36:27.019 17:01:31 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 819825 ']' 00:36:27.019 17:01:31 keyring_file -- common/autotest_common.sh@954 -- # kill -0 819825 00:36:27.019 17:01:31 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:27.019 17:01:31 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:27.019 17:01:31 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 819825 00:36:27.019 17:01:31 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:27.019 17:01:31 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:27.019 17:01:31 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 819825' 00:36:27.019 killing process with pid 819825 00:36:27.019 17:01:31 keyring_file -- common/autotest_common.sh@969 -- # kill 819825 00:36:27.019 Received shutdown signal, test time was about 1.000000 seconds 00:36:27.019 00:36:27.019 Latency(us) 00:36:27.019 [2024-10-14T15:01:31.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.019 [2024-10-14T15:01:31.653Z] =================================================================================================================== 00:36:27.019 [2024-10-14T15:01:31.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:27.019 17:01:31 keyring_file -- common/autotest_common.sh@974 
-- # wait 819825 00:36:27.279 17:01:31 keyring_file -- keyring/file.sh@118 -- # bperfpid=821343 00:36:27.279 17:01:31 keyring_file -- keyring/file.sh@120 -- # waitforlisten 821343 /var/tmp/bperf.sock 00:36:27.279 17:01:31 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 821343 ']' 00:36:27.279 17:01:31 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:27.279 17:01:31 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:27.279 17:01:31 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:27.279 17:01:31 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:27.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:27.279 17:01:31 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:27.279 "subsystems": [ 00:36:27.279 { 00:36:27.279 "subsystem": "keyring", 00:36:27.279 "config": [ 00:36:27.279 { 00:36:27.279 "method": "keyring_file_add_key", 00:36:27.279 "params": { 00:36:27.279 "name": "key0", 00:36:27.279 "path": "/tmp/tmp.JaIx4VaIJa" 00:36:27.279 } 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "method": "keyring_file_add_key", 00:36:27.279 "params": { 00:36:27.279 "name": "key1", 00:36:27.279 "path": "/tmp/tmp.ijfQqJVaJ6" 00:36:27.279 } 00:36:27.279 } 00:36:27.279 ] 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "subsystem": "iobuf", 00:36:27.279 "config": [ 00:36:27.279 { 00:36:27.279 "method": "iobuf_set_options", 00:36:27.279 "params": { 00:36:27.279 "small_pool_count": 8192, 00:36:27.279 "large_pool_count": 1024, 00:36:27.279 "small_bufsize": 8192, 00:36:27.279 "large_bufsize": 135168 00:36:27.279 } 00:36:27.279 } 00:36:27.279 ] 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "subsystem": "sock", 00:36:27.279 "config": [ 00:36:27.279 { 00:36:27.279 "method": "sock_set_default_impl", 00:36:27.279 "params": { 00:36:27.279 "impl_name": "posix" 00:36:27.279 } 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "method": "sock_impl_set_options", 00:36:27.279 "params": { 00:36:27.279 "impl_name": "ssl", 00:36:27.279 "recv_buf_size": 4096, 00:36:27.279 "send_buf_size": 4096, 00:36:27.279 "enable_recv_pipe": true, 00:36:27.279 "enable_quickack": false, 00:36:27.279 "enable_placement_id": 0, 00:36:27.279 "enable_zerocopy_send_server": true, 00:36:27.279 "enable_zerocopy_send_client": false, 00:36:27.279 "zerocopy_threshold": 0, 00:36:27.279 "tls_version": 0, 00:36:27.279 "enable_ktls": false 00:36:27.279 } 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "method": "sock_impl_set_options", 00:36:27.279 "params": { 00:36:27.279 "impl_name": "posix", 00:36:27.279 "recv_buf_size": 2097152, 00:36:27.279 "send_buf_size": 2097152, 00:36:27.279 "enable_recv_pipe": true, 00:36:27.279 "enable_quickack": false, 00:36:27.279 "enable_placement_id": 0, 00:36:27.279 "enable_zerocopy_send_server": true, 00:36:27.279 "enable_zerocopy_send_client": false, 00:36:27.279 "zerocopy_threshold": 0, 00:36:27.279 "tls_version": 0, 00:36:27.279 "enable_ktls": false 00:36:27.279 } 00:36:27.279 } 00:36:27.279 ] 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "subsystem": "vmd", 00:36:27.279 "config": [] 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "subsystem": "accel", 00:36:27.279 "config": [ 00:36:27.279 { 00:36:27.279 "method": "accel_set_options", 00:36:27.279 "params": { 00:36:27.279 
"small_cache_size": 128, 00:36:27.279 "large_cache_size": 16, 00:36:27.279 "task_count": 2048, 00:36:27.279 "sequence_count": 2048, 00:36:27.279 "buf_count": 2048 00:36:27.279 } 00:36:27.279 } 00:36:27.279 ] 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "subsystem": "bdev", 00:36:27.279 "config": [ 00:36:27.279 { 00:36:27.279 "method": "bdev_set_options", 00:36:27.279 "params": { 00:36:27.279 "bdev_io_pool_size": 65535, 00:36:27.279 "bdev_io_cache_size": 256, 00:36:27.279 "bdev_auto_examine": true, 00:36:27.279 "iobuf_small_cache_size": 128, 00:36:27.279 "iobuf_large_cache_size": 16 00:36:27.279 } 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "method": "bdev_raid_set_options", 00:36:27.279 "params": { 00:36:27.279 "process_window_size_kb": 1024, 00:36:27.279 "process_max_bandwidth_mb_sec": 0 00:36:27.279 } 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "method": "bdev_iscsi_set_options", 00:36:27.279 "params": { 00:36:27.279 "timeout_sec": 30 00:36:27.279 } 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "method": "bdev_nvme_set_options", 00:36:27.279 "params": { 00:36:27.279 "action_on_timeout": "none", 00:36:27.279 "timeout_us": 0, 00:36:27.279 "timeout_admin_us": 0, 00:36:27.279 "keep_alive_timeout_ms": 10000, 00:36:27.279 "arbitration_burst": 0, 00:36:27.279 "low_priority_weight": 0, 00:36:27.279 "medium_priority_weight": 0, 00:36:27.279 "high_priority_weight": 0, 00:36:27.279 "nvme_adminq_poll_period_us": 10000, 00:36:27.279 "nvme_ioq_poll_period_us": 0, 00:36:27.279 "io_queue_requests": 512, 00:36:27.279 "delay_cmd_submit": true, 00:36:27.279 "transport_retry_count": 4, 00:36:27.279 "bdev_retry_count": 3, 00:36:27.279 "transport_ack_timeout": 0, 00:36:27.279 "ctrlr_loss_timeout_sec": 0, 00:36:27.279 "reconnect_delay_sec": 0, 00:36:27.279 "fast_io_fail_timeout_sec": 0, 00:36:27.279 "disable_auto_failback": false, 00:36:27.279 "generate_uuids": false, 00:36:27.279 "transport_tos": 0, 00:36:27.279 "nvme_error_stat": false, 00:36:27.279 "rdma_srq_size": 0, 00:36:27.279 "io_path_stat": false, 00:36:27.279 "allow_accel_sequence": false, 00:36:27.279 "rdma_max_cq_size": 0, 00:36:27.279 "rdma_cm_event_timeout_ms": 0, 00:36:27.279 "dhchap_digests": [ 00:36:27.279 "sha256", 00:36:27.279 "sha384", 00:36:27.279 "sha512" 00:36:27.279 ], 00:36:27.279 "dhchap_dhgroups": [ 00:36:27.279 "null", 00:36:27.279 "ffdhe2048", 00:36:27.279 "ffdhe3072", 00:36:27.279 "ffdhe4096", 00:36:27.279 "ffdhe6144", 00:36:27.279 "ffdhe8192" 00:36:27.279 ] 00:36:27.279 } 00:36:27.279 }, 00:36:27.279 { 00:36:27.279 "method": "bdev_nvme_attach_controller", 00:36:27.279 "params": { 00:36:27.279 "name": "nvme0", 00:36:27.279 "trtype": "TCP", 00:36:27.279 "adrfam": "IPv4", 00:36:27.279 "traddr": "127.0.0.1", 00:36:27.279 "trsvcid": "4420", 00:36:27.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.279 "prchk_reftag": false, 00:36:27.279 "prchk_guard": false, 00:36:27.279 "ctrlr_loss_timeout_sec": 0, 00:36:27.279 "reconnect_delay_sec": 0, 00:36:27.279 "fast_io_fail_timeout_sec": 0, 00:36:27.279 "psk": "key0", 00:36:27.279 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:27.279 "hdgst": false, 00:36:27.279 "ddgst": false, 00:36:27.279 "multipath": "multipath" 00:36:27.280 } 00:36:27.280 }, 00:36:27.280 { 00:36:27.280 "method": "bdev_nvme_set_hotplug", 00:36:27.280 "params": { 00:36:27.280 "period_us": 100000, 00:36:27.280 "enable": false 00:36:27.280 } 00:36:27.280 }, 00:36:27.280 { 00:36:27.280 "method": "bdev_wait_for_examine" 00:36:27.280 } 00:36:27.280 ] 00:36:27.280 }, 00:36:27.280 { 00:36:27.280 "subsystem": "nbd", 00:36:27.280 
"config": [] 00:36:27.280 } 00:36:27.280 ] 00:36:27.280 }' 00:36:27.280 17:01:31 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:27.280 17:01:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:27.280 [2024-10-14 17:01:31.757878] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 00:36:27.280 [2024-10-14 17:01:31.757930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821343 ] 00:36:27.280 [2024-10-14 17:01:31.826850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.280 [2024-10-14 17:01:31.867055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.539 [2024-10-14 17:01:32.026634] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:28.106 17:01:32 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:28.106 17:01:32 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:28.106 17:01:32 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:28.106 17:01:32 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:28.106 17:01:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.365 17:01:32 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:28.365 17:01:32 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:28.365 17:01:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:28.365 17:01:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.365 17:01:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.365 17:01:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:28.365 17:01:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.365 17:01:32 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:28.365 17:01:32 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:28.365 17:01:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:28.365 17:01:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.365 17:01:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.365 17:01:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:28.365 17:01:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.624 17:01:33 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:28.624 17:01:33 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:28.624 17:01:33 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:28.624 17:01:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:28.883 17:01:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:28.883 17:01:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:28.883 17:01:33 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.JaIx4VaIJa /tmp/tmp.ijfQqJVaJ6 00:36:28.883 17:01:33 
keyring_file -- keyring/file.sh@20 -- # killprocess 821343 00:36:28.883 17:01:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 821343 ']' 00:36:28.883 17:01:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 821343 00:36:28.883 17:01:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:28.883 17:01:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:28.883 17:01:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 821343 00:36:28.883 17:01:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:28.883 17:01:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:28.883 17:01:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 821343' 00:36:28.883 killing process with pid 821343 00:36:28.883 17:01:33 keyring_file -- common/autotest_common.sh@969 -- # kill 821343 00:36:28.883 Received shutdown signal, test time was about 1.000000 seconds 00:36:28.883 00:36:28.883 Latency(us) 00:36:28.883 [2024-10-14T15:01:33.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:28.883 [2024-10-14T15:01:33.517Z] =================================================================================================================== 00:36:28.883 [2024-10-14T15:01:33.517Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:28.883 17:01:33 keyring_file -- common/autotest_common.sh@974 -- # wait 821343 00:36:29.142 17:01:33 keyring_file -- keyring/file.sh@21 -- # killprocess 819817 00:36:29.142 17:01:33 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 819817 ']' 00:36:29.142 17:01:33 keyring_file -- common/autotest_common.sh@954 -- # kill -0 819817 00:36:29.142 17:01:33 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:29.142 17:01:33 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:29.142 17:01:33 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 819817 00:36:29.142 17:01:33 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:29.142 17:01:33 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:29.142 17:01:33 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 819817' 00:36:29.142 killing process with pid 819817 00:36:29.142 17:01:33 keyring_file -- common/autotest_common.sh@969 -- # kill 819817 00:36:29.142 17:01:33 keyring_file -- common/autotest_common.sh@974 -- # wait 819817 00:36:29.402 00:36:29.402 real 0m11.697s 00:36:29.402 user 0m29.124s 00:36:29.402 sys 0m2.669s 00:36:29.402 17:01:33 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:29.402 17:01:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:29.402 ************************************ 00:36:29.402 END TEST keyring_file 00:36:29.402 ************************************ 00:36:29.402 17:01:33 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:36:29.402 17:01:33 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:29.402 17:01:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:29.402 17:01:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:29.402 17:01:33 -- common/autotest_common.sh@10 -- # set +x 00:36:29.402 ************************************ 00:36:29.402 
START TEST keyring_linux 00:36:29.402 ************************************ 00:36:29.402 17:01:33 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:29.402 Joined session keyring: 595385847 00:36:29.662 * Looking for test storage... 00:36:29.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:29.662 17:01:34 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:29.662 17:01:34 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:36:29.662 17:01:34 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:29.662 17:01:34 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:29.662 17:01:34 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:29.662 17:01:34 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.662 --rc genhtml_branch_coverage=1 00:36:29.662 --rc genhtml_function_coverage=1 00:36:29.662 --rc genhtml_legend=1 00:36:29.662 --rc geninfo_all_blocks=1 00:36:29.662 --rc geninfo_unexecuted_blocks=1 00:36:29.662 00:36:29.662 ' 00:36:29.662 17:01:34 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.662 --rc genhtml_branch_coverage=1 00:36:29.662 --rc genhtml_function_coverage=1 00:36:29.662 --rc genhtml_legend=1 00:36:29.662 --rc geninfo_all_blocks=1 00:36:29.662 --rc geninfo_unexecuted_blocks=1 00:36:29.662 00:36:29.662 ' 00:36:29.662 17:01:34 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.662 --rc genhtml_branch_coverage=1 00:36:29.662 --rc genhtml_function_coverage=1 00:36:29.662 --rc genhtml_legend=1 00:36:29.662 --rc geninfo_all_blocks=1 00:36:29.662 --rc geninfo_unexecuted_blocks=1 00:36:29.662 00:36:29.662 ' 00:36:29.662 17:01:34 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:29.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:29.662 --rc genhtml_branch_coverage=1 00:36:29.662 --rc genhtml_function_coverage=1 00:36:29.662 --rc genhtml_legend=1 00:36:29.662 --rc geninfo_all_blocks=1 00:36:29.662 --rc geninfo_unexecuted_blocks=1 00:36:29.662 00:36:29.662 ' 00:36:29.662 17:01:34 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:29.662 17:01:34 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:29.662 17:01:34 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:29.662 17:01:34 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.662 17:01:34 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.662 17:01:34 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:29.662 17:01:34 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:29.662 17:01:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:29.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:29.662 17:01:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:29.662 17:01:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:29.662 17:01:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:29.662 17:01:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:29.662 17:01:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:29.662 17:01:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:29.662 17:01:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:29.662 17:01:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:29.662 17:01:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:29.662 17:01:34 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:29.662 17:01:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:29.662 17:01:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:29.662 17:01:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:36:29.662 17:01:34 keyring_linux -- nvmf/common.sh@731 -- # python - 00:36:29.663 17:01:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:29.663 17:01:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:29.663 /tmp/:spdk-test:key0 00:36:29.663 17:01:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:29.663 17:01:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:29.663 17:01:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:29.663 17:01:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:29.663 17:01:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:29.663 17:01:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:29.663 
17:01:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:29.663 17:01:34 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:29.663 17:01:34 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:36:29.663 17:01:34 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:29.663 17:01:34 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:36:29.663 17:01:34 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:36:29.663 17:01:34 keyring_linux -- nvmf/common.sh@731 -- # python - 00:36:29.663 17:01:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:29.663 17:01:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:29.663 /tmp/:spdk-test:key1 00:36:29.663 17:01:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=821892 00:36:29.663 17:01:34 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:29.663 17:01:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 821892 00:36:29.663 17:01:34 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 821892 ']' 00:36:29.663 17:01:34 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.663 17:01:34 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:29.663 17:01:34 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.663 17:01:34 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:29.663 17:01:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:29.922 [2024-10-14 17:01:34.298413] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
00:36:29.922 [2024-10-14 17:01:34.298461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821892 ] 00:36:29.922 [2024-10-14 17:01:34.366971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.922 [2024-10-14 17:01:34.408723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:30.181 17:01:34 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:30.181 [2024-10-14 17:01:34.615701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.181 null0 00:36:30.181 [2024-10-14 17:01:34.647741] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:30.181 [2024-10-14 17:01:34.648074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.181 17:01:34 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:30.181 285900534 00:36:30.181 17:01:34 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:30.181 443055153 00:36:30.181 17:01:34 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=821901 00:36:30.181 17:01:34 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 821901 /var/tmp/bperf.sock 00:36:30.181 17:01:34 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 821901 ']' 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:30.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:30.181 17:01:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:30.181 [2024-10-14 17:01:34.721762] Starting SPDK v25.01-pre git sha1 d6f411c3e / DPDK 24.03.0 initialization... 
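At this point both PSKs have been registered in the session keyring by name (:spdk-test:key0 and :spdk-test:key1) and bdevperf has been launched with --wait-for-rpc on /var/tmp/bperf.sock; the trace below then enables kernel-keyring lookups and attaches the controller by key name. A stand-alone repeat of those steps would look roughly like this sketch (rpc.py is assumed to be run from an SPDK checkout, and serial numbers such as 285900534 and 443055153 are specific to this run):

    # register the PSKs in the session keyring under the names the test uses
    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
    keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s
    keyctl search @s user :spdk-test:key0        # resolve name -> serial number (285900534 in this run)
    keyctl print 285900534                       # verify the stored payload matches the PSK file
    # enable keyring lookups in bdevperf, finish init, then attach using the key *name* as --psk
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    # cleanup at the end of the test unlinks the keys again
    keyctl unlink 285900534
    keyctl unlink 443055153

A later attach using :spdk-test:key1 is wrapped in NOT and is expected to fail; the error output further below shows that negative case.
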
00:36:30.181 [2024-10-14 17:01:34.721802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821901 ] 00:36:30.181 [2024-10-14 17:01:34.789667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.440 [2024-10-14 17:01:34.830757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.440 17:01:34 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:30.440 17:01:34 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:30.440 17:01:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:30.440 17:01:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:30.440 17:01:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:30.440 17:01:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:30.699 17:01:35 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:30.699 17:01:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:30.958 [2024-10-14 17:01:35.474622] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:30.958 nvme0n1 00:36:30.958 17:01:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:30.958 17:01:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:30.958 17:01:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:30.958 17:01:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:30.958 17:01:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:30.958 17:01:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.217 17:01:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:31.217 17:01:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:31.217 17:01:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:31.217 17:01:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:31.217 17:01:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.217 17:01:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:31.217 17:01:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.476 17:01:35 keyring_linux -- keyring/linux.sh@25 -- # sn=285900534 00:36:31.476 17:01:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:31.476 17:01:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:31.476 17:01:35 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 285900534 == \2\8\5\9\0\0\5\3\4 ]] 00:36:31.476 17:01:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 285900534 00:36:31.476 17:01:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:31.476 17:01:35 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:31.476 Running I/O for 1 seconds... 00:36:32.854 21814.00 IOPS, 85.21 MiB/s 00:36:32.854 Latency(us) 00:36:32.854 [2024-10-14T15:01:37.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:32.854 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:32.854 nvme0n1 : 1.01 21814.62 85.21 0.00 0.00 5849.04 4962.01 12358.22 00:36:32.854 [2024-10-14T15:01:37.488Z] =================================================================================================================== 00:36:32.854 [2024-10-14T15:01:37.488Z] Total : 21814.62 85.21 0.00 0.00 5849.04 4962.01 12358.22 00:36:32.854 { 00:36:32.854 "results": [ 00:36:32.854 { 00:36:32.854 "job": "nvme0n1", 00:36:32.854 "core_mask": "0x2", 00:36:32.854 "workload": "randread", 00:36:32.854 "status": "finished", 00:36:32.854 "queue_depth": 128, 00:36:32.854 "io_size": 4096, 00:36:32.854 "runtime": 1.005839, 00:36:32.854 "iops": 21814.624408081214, 00:36:32.854 "mibps": 85.21337659406724, 00:36:32.854 "io_failed": 0, 00:36:32.854 "io_timeout": 0, 00:36:32.854 "avg_latency_us": 5849.041989313819, 00:36:32.854 "min_latency_us": 4962.011428571429, 00:36:32.854 "max_latency_us": 12358.217142857144 00:36:32.854 } 00:36:32.854 ], 00:36:32.854 "core_count": 1 00:36:32.854 } 00:36:32.854 17:01:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:32.854 17:01:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:32.854 17:01:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:32.854 17:01:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:32.854 17:01:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:32.854 17:01:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:32.854 17:01:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:32.854 17:01:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:32.854 17:01:37 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:32.854 17:01:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:32.854 17:01:37 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:32.854 17:01:37 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:32.854 17:01:37 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:32.854 17:01:37 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:36:32.854 17:01:37 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:33.113 17:01:37 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:33.113 17:01:37 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:33.113 17:01:37 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:33.113 17:01:37 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:33.113 17:01:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:33.113 [2024-10-14 17:01:37.656131] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:33.113 [2024-10-14 17:01:37.656400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x847010 (107): Transport endpoint is not connected 00:36:33.113 [2024-10-14 17:01:37.657396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x847010 (9): Bad file descriptor 00:36:33.113 [2024-10-14 17:01:37.658397] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:33.113 [2024-10-14 17:01:37.658407] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:33.113 [2024-10-14 17:01:37.658414] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:33.113 [2024-10-14 17:01:37.658423] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:36:33.113 request: 00:36:33.113 { 00:36:33.113 "name": "nvme0", 00:36:33.113 "trtype": "tcp", 00:36:33.113 "traddr": "127.0.0.1", 00:36:33.113 "adrfam": "ipv4", 00:36:33.113 "trsvcid": "4420", 00:36:33.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:33.113 "prchk_reftag": false, 00:36:33.113 "prchk_guard": false, 00:36:33.113 "hdgst": false, 00:36:33.114 "ddgst": false, 00:36:33.114 "psk": ":spdk-test:key1", 00:36:33.114 "allow_unrecognized_csi": false, 00:36:33.114 "method": "bdev_nvme_attach_controller", 00:36:33.114 "req_id": 1 00:36:33.114 } 00:36:33.114 Got JSON-RPC error response 00:36:33.114 response: 00:36:33.114 { 00:36:33.114 "code": -5, 00:36:33.114 "message": "Input/output error" 00:36:33.114 } 00:36:33.114 17:01:37 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:33.114 17:01:37 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:33.114 17:01:37 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:33.114 17:01:37 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@33 -- # sn=285900534 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 285900534 00:36:33.114 1 links removed 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@33 -- # sn=443055153 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 443055153 00:36:33.114 1 links removed 00:36:33.114 17:01:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 821901 00:36:33.114 17:01:37 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 821901 ']' 00:36:33.114 17:01:37 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 821901 00:36:33.114 17:01:37 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:33.114 17:01:37 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:33.114 17:01:37 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 821901 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 821901' 00:36:33.373 killing process with pid 821901 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@969 -- # kill 821901 00:36:33.373 Received shutdown signal, test time was about 1.000000 seconds 00:36:33.373 00:36:33.373 
Latency(us) 00:36:33.373 [2024-10-14T15:01:38.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:33.373 [2024-10-14T15:01:38.007Z] =================================================================================================================== 00:36:33.373 [2024-10-14T15:01:38.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@974 -- # wait 821901 00:36:33.373 17:01:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 821892 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 821892 ']' 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 821892 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 821892 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 821892' 00:36:33.373 killing process with pid 821892 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@969 -- # kill 821892 00:36:33.373 17:01:37 keyring_linux -- common/autotest_common.sh@974 -- # wait 821892 00:36:33.632 00:36:33.632 real 0m4.287s 00:36:33.632 user 0m8.165s 00:36:33.632 sys 0m1.392s 00:36:33.632 17:01:38 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:33.632 17:01:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:33.632 ************************************ 00:36:33.632 END TEST keyring_linux 00:36:33.632 ************************************ 00:36:33.891 17:01:38 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:33.891 17:01:38 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:36:33.891 17:01:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:33.891 17:01:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:33.891 17:01:38 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:36:33.891 17:01:38 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:36:33.891 17:01:38 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:36:33.891 17:01:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:33.891 17:01:38 -- common/autotest_common.sh@10 -- # set +x 00:36:33.891 17:01:38 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:36:33.891 17:01:38 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:33.891 17:01:38 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:33.891 17:01:38 -- common/autotest_common.sh@10 -- # set +x 00:36:39.165 INFO: APP EXITING 00:36:39.165 INFO: 
killing all VMs 00:36:39.165 INFO: killing vhost app 00:36:39.165 INFO: EXIT DONE 00:36:41.702 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:41.702 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:41.702 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:44.993 Cleaning 00:36:44.993 Removing: /var/run/dpdk/spdk0/config 00:36:44.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:44.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:44.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:44.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:44.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:44.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:44.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:44.993 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:44.993 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:44.993 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:44.993 Removing: /var/run/dpdk/spdk1/config 00:36:44.993 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:44.993 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:44.993 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:44.993 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:44.993 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:44.993 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:44.993 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:44.993 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:44.993 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:44.993 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:44.993 Removing: /var/run/dpdk/spdk2/config 00:36:44.993 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:44.993 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:44.993 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:44.993 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:44.993 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:44.993 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:44.993 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:44.993 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:44.993 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:44.993 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:44.993 Removing: /var/run/dpdk/spdk3/config 00:36:44.993 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:44.993 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:44.993 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:44.993 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:44.993 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:44.993 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:44.993 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:44.993 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:44.993 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:44.993 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:44.993 Removing: /var/run/dpdk/spdk4/config 00:36:44.993 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:44.993 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:44.993 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:44.993 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:44.993 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:44.993 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:44.993 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:44.993 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:44.993 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:44.993 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:44.993 Removing: /dev/shm/bdev_svc_trace.1 00:36:44.993 Removing: /dev/shm/nvmf_trace.0 00:36:44.993 Removing: /dev/shm/spdk_tgt_trace.pid350289 00:36:44.993 Removing: /var/run/dpdk/spdk0 00:36:44.993 Removing: /var/run/dpdk/spdk1 00:36:44.993 Removing: /var/run/dpdk/spdk2 00:36:44.993 Removing: /var/run/dpdk/spdk3 00:36:44.993 Removing: /var/run/dpdk/spdk4 00:36:44.993 Removing: /var/run/dpdk/spdk_pid347525 00:36:44.993 Removing: /var/run/dpdk/spdk_pid348985 00:36:44.993 Removing: /var/run/dpdk/spdk_pid350289 00:36:44.993 Removing: /var/run/dpdk/spdk_pid350739 00:36:44.993 Removing: /var/run/dpdk/spdk_pid351661 00:36:44.993 Removing: /var/run/dpdk/spdk_pid351888 00:36:44.993 Removing: /var/run/dpdk/spdk_pid352859 00:36:44.993 Removing: /var/run/dpdk/spdk_pid352884 00:36:44.993 Removing: /var/run/dpdk/spdk_pid353228 00:36:44.993 Removing: /var/run/dpdk/spdk_pid354964 00:36:44.993 Removing: /var/run/dpdk/spdk_pid356240 00:36:44.993 Removing: /var/run/dpdk/spdk_pid356526 00:36:44.993 Removing: /var/run/dpdk/spdk_pid356815 00:36:44.993 Removing: /var/run/dpdk/spdk_pid357123 00:36:44.993 Removing: /var/run/dpdk/spdk_pid357415 00:36:44.993 Removing: /var/run/dpdk/spdk_pid357581 00:36:44.993 Removing: /var/run/dpdk/spdk_pid357744 00:36:44.993 Removing: /var/run/dpdk/spdk_pid358050 00:36:44.993 Removing: /var/run/dpdk/spdk_pid358745 00:36:44.993 Removing: /var/run/dpdk/spdk_pid361858 00:36:44.993 Removing: /var/run/dpdk/spdk_pid361995 00:36:44.993 Removing: /var/run/dpdk/spdk_pid362249 00:36:44.993 Removing: /var/run/dpdk/spdk_pid362367 00:36:44.993 Removing: /var/run/dpdk/spdk_pid362746 00:36:44.993 Removing: /var/run/dpdk/spdk_pid362901 00:36:44.993 Removing: /var/run/dpdk/spdk_pid363276 00:36:44.993 Removing: /var/run/dpdk/spdk_pid363465 00:36:44.993 Removing: /var/run/dpdk/spdk_pid363726 00:36:44.993 Removing: /var/run/dpdk/spdk_pid363742 00:36:44.993 Removing: /var/run/dpdk/spdk_pid363998 00:36:44.993 Removing: /var/run/dpdk/spdk_pid364008 00:36:44.993 Removing: /var/run/dpdk/spdk_pid364571 00:36:44.993 Removing: /var/run/dpdk/spdk_pid364820 00:36:44.993 Removing: /var/run/dpdk/spdk_pid365113 00:36:44.993 Removing: /var/run/dpdk/spdk_pid368827 00:36:44.993 
Removing: /var/run/dpdk/spdk_pid373089 00:36:44.993 Removing: /var/run/dpdk/spdk_pid383157 00:36:44.993 Removing: /var/run/dpdk/spdk_pid383822 00:36:44.993 Removing: /var/run/dpdk/spdk_pid388270 00:36:44.993 Removing: /var/run/dpdk/spdk_pid388698 00:36:44.993 Removing: /var/run/dpdk/spdk_pid393356 00:36:44.993 Removing: /var/run/dpdk/spdk_pid399237 00:36:44.993 Removing: /var/run/dpdk/spdk_pid401838 00:36:44.994 Removing: /var/run/dpdk/spdk_pid412070 00:36:44.994 Removing: /var/run/dpdk/spdk_pid420994 00:36:44.994 Removing: /var/run/dpdk/spdk_pid422830 00:36:44.994 Removing: /var/run/dpdk/spdk_pid423757 00:36:44.994 Removing: /var/run/dpdk/spdk_pid440835 00:36:44.994 Removing: /var/run/dpdk/spdk_pid444989 00:36:44.994 Removing: /var/run/dpdk/spdk_pid489435 00:36:44.994 Removing: /var/run/dpdk/spdk_pid495206 00:36:44.994 Removing: /var/run/dpdk/spdk_pid500976 00:36:44.994 Removing: /var/run/dpdk/spdk_pid506955 00:36:44.994 Removing: /var/run/dpdk/spdk_pid506992 00:36:44.994 Removing: /var/run/dpdk/spdk_pid507746 00:36:44.994 Removing: /var/run/dpdk/spdk_pid508611 00:36:44.994 Removing: /var/run/dpdk/spdk_pid509525 00:36:44.994 Removing: /var/run/dpdk/spdk_pid510012 00:36:44.994 Removing: /var/run/dpdk/spdk_pid510194 00:36:44.994 Removing: /var/run/dpdk/spdk_pid510441 00:36:44.994 Removing: /var/run/dpdk/spdk_pid510456 00:36:44.994 Removing: /var/run/dpdk/spdk_pid510459 00:36:44.994 Removing: /var/run/dpdk/spdk_pid511374 00:36:44.994 Removing: /var/run/dpdk/spdk_pid512286 00:36:44.994 Removing: /var/run/dpdk/spdk_pid513205 00:36:44.994 Removing: /var/run/dpdk/spdk_pid513681 00:36:44.994 Removing: /var/run/dpdk/spdk_pid513683 00:36:44.994 Removing: /var/run/dpdk/spdk_pid513983 00:36:44.994 Removing: /var/run/dpdk/spdk_pid515142 00:36:44.994 Removing: /var/run/dpdk/spdk_pid516131 00:36:44.994 Removing: /var/run/dpdk/spdk_pid524221 00:36:44.994 Removing: /var/run/dpdk/spdk_pid553104 00:36:44.994 Removing: /var/run/dpdk/spdk_pid557627 00:36:44.994 Removing: /var/run/dpdk/spdk_pid559230 00:36:44.994 Removing: /var/run/dpdk/spdk_pid561066 00:36:44.994 Removing: /var/run/dpdk/spdk_pid561088 00:36:44.994 Removing: /var/run/dpdk/spdk_pid561320 00:36:44.994 Removing: /var/run/dpdk/spdk_pid561436 00:36:44.994 Removing: /var/run/dpdk/spdk_pid561839 00:36:44.994 Removing: /var/run/dpdk/spdk_pid563674 00:36:44.994 Removing: /var/run/dpdk/spdk_pid564646 00:36:44.994 Removing: /var/run/dpdk/spdk_pid565059 00:36:44.994 Removing: /var/run/dpdk/spdk_pid567679 00:36:44.994 Removing: /var/run/dpdk/spdk_pid568043 00:36:45.253 Removing: /var/run/dpdk/spdk_pid568763 00:36:45.253 Removing: /var/run/dpdk/spdk_pid572914 00:36:45.253 Removing: /var/run/dpdk/spdk_pid578278 00:36:45.253 Removing: /var/run/dpdk/spdk_pid578280 00:36:45.253 Removing: /var/run/dpdk/spdk_pid578282 00:36:45.253 Removing: /var/run/dpdk/spdk_pid582173 00:36:45.253 Removing: /var/run/dpdk/spdk_pid590530 00:36:45.253 Removing: /var/run/dpdk/spdk_pid594354 00:36:45.253 Removing: /var/run/dpdk/spdk_pid600574 00:36:45.253 Removing: /var/run/dpdk/spdk_pid601680 00:36:45.253 Removing: /var/run/dpdk/spdk_pid603210 00:36:45.253 Removing: /var/run/dpdk/spdk_pid604750 00:36:45.253 Removing: /var/run/dpdk/spdk_pid609413 00:36:45.253 Removing: /var/run/dpdk/spdk_pid613661 00:36:45.253 Removing: /var/run/dpdk/spdk_pid621420 00:36:45.253 Removing: /var/run/dpdk/spdk_pid621422 00:36:45.253 Removing: /var/run/dpdk/spdk_pid626151 00:36:45.253 Removing: /var/run/dpdk/spdk_pid626386 00:36:45.253 Removing: /var/run/dpdk/spdk_pid626527 00:36:45.253 Removing: 
/var/run/dpdk/spdk_pid626856 00:36:45.253 Removing: /var/run/dpdk/spdk_pid627066 00:36:45.253 Removing: /var/run/dpdk/spdk_pid631563 00:36:45.253 Removing: /var/run/dpdk/spdk_pid632136 00:36:45.253 Removing: /var/run/dpdk/spdk_pid636490 00:36:45.253 Removing: /var/run/dpdk/spdk_pid639154 00:36:45.253 Removing: /var/run/dpdk/spdk_pid644425 00:36:45.253 Removing: /var/run/dpdk/spdk_pid649747 00:36:45.253 Removing: /var/run/dpdk/spdk_pid658524 00:36:45.253 Removing: /var/run/dpdk/spdk_pid666210 00:36:45.253 Removing: /var/run/dpdk/spdk_pid666253 00:36:45.253 Removing: /var/run/dpdk/spdk_pid684822 00:36:45.253 Removing: /var/run/dpdk/spdk_pid685293 00:36:45.253 Removing: /var/run/dpdk/spdk_pid685921 00:36:45.253 Removing: /var/run/dpdk/spdk_pid686462 00:36:45.253 Removing: /var/run/dpdk/spdk_pid687194 00:36:45.253 Removing: /var/run/dpdk/spdk_pid687674 00:36:45.253 Removing: /var/run/dpdk/spdk_pid688145 00:36:45.253 Removing: /var/run/dpdk/spdk_pid688833 00:36:45.253 Removing: /var/run/dpdk/spdk_pid693096 00:36:45.253 Removing: /var/run/dpdk/spdk_pid693336 00:36:45.253 Removing: /var/run/dpdk/spdk_pid699263 00:36:45.253 Removing: /var/run/dpdk/spdk_pid699449 00:36:45.253 Removing: /var/run/dpdk/spdk_pid704764 00:36:45.253 Removing: /var/run/dpdk/spdk_pid709031 00:36:45.253 Removing: /var/run/dpdk/spdk_pid719200 00:36:45.253 Removing: /var/run/dpdk/spdk_pid719883 00:36:45.253 Removing: /var/run/dpdk/spdk_pid724149 00:36:45.253 Removing: /var/run/dpdk/spdk_pid724400 00:36:45.253 Removing: /var/run/dpdk/spdk_pid728602 00:36:45.253 Removing: /var/run/dpdk/spdk_pid734275 00:36:45.253 Removing: /var/run/dpdk/spdk_pid736857 00:36:45.253 Removing: /var/run/dpdk/spdk_pid746795 00:36:45.253 Removing: /var/run/dpdk/spdk_pid755465 00:36:45.253 Removing: /var/run/dpdk/spdk_pid757545 00:36:45.253 Removing: /var/run/dpdk/spdk_pid758501 00:36:45.254 Removing: /var/run/dpdk/spdk_pid774620 00:36:45.254 Removing: /var/run/dpdk/spdk_pid778435 00:36:45.254 Removing: /var/run/dpdk/spdk_pid781131 00:36:45.254 Removing: /var/run/dpdk/spdk_pid789097 00:36:45.254 Removing: /var/run/dpdk/spdk_pid789104 00:36:45.254 Removing: /var/run/dpdk/spdk_pid794300 00:36:45.254 Removing: /var/run/dpdk/spdk_pid796131 00:36:45.254 Removing: /var/run/dpdk/spdk_pid798094 00:36:45.254 Removing: /var/run/dpdk/spdk_pid799269 00:36:45.513 Removing: /var/run/dpdk/spdk_pid801495 00:36:45.513 Removing: /var/run/dpdk/spdk_pid802907 00:36:45.513 Removing: /var/run/dpdk/spdk_pid811645 00:36:45.513 Removing: /var/run/dpdk/spdk_pid812107 00:36:45.513 Removing: /var/run/dpdk/spdk_pid812566 00:36:45.513 Removing: /var/run/dpdk/spdk_pid815051 00:36:45.513 Removing: /var/run/dpdk/spdk_pid815518 00:36:45.513 Removing: /var/run/dpdk/spdk_pid815981 00:36:45.513 Removing: /var/run/dpdk/spdk_pid819817 00:36:45.513 Removing: /var/run/dpdk/spdk_pid819825 00:36:45.513 Removing: /var/run/dpdk/spdk_pid821343 00:36:45.513 Removing: /var/run/dpdk/spdk_pid821892 00:36:45.513 Removing: /var/run/dpdk/spdk_pid821901 00:36:45.513 Clean 00:36:45.513 17:01:50 -- common/autotest_common.sh@1451 -- # return 0 00:36:45.513 17:01:50 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:36:45.513 17:01:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:45.513 17:01:50 -- common/autotest_common.sh@10 -- # set +x 00:36:45.513 17:01:50 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:36:45.513 17:01:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:45.513 17:01:50 -- common/autotest_common.sh@10 -- # set +x 00:36:45.513 17:01:50 -- 
spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:45.513 17:01:50 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:45.513 17:01:50 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:45.513 17:01:50 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:36:45.513 17:01:50 -- spdk/autotest.sh@394 -- # hostname 00:36:45.513 17:01:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:45.771 geninfo: WARNING: invalid characters removed from testname! 00:37:07.708 17:02:10 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:08.644 17:02:13 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:10.548 17:02:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:12.452 17:02:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:14.356 17:02:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:16.260 17:02:20 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:18.165 17:02:22 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:18.165 17:02:22 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:37:18.165 17:02:22 -- common/autotest_common.sh@1691 -- $ lcov --version 00:37:18.165 17:02:22 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:37:18.165 17:02:22 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:37:18.165 17:02:22 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:37:18.165 17:02:22 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:37:18.165 17:02:22 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:37:18.165 17:02:22 -- scripts/common.sh@336 -- $ IFS=.-: 00:37:18.165 17:02:22 -- scripts/common.sh@336 -- $ read -ra ver1 00:37:18.165 17:02:22 -- scripts/common.sh@337 -- $ IFS=.-: 00:37:18.166 17:02:22 -- scripts/common.sh@337 -- $ read -ra ver2 00:37:18.166 17:02:22 -- scripts/common.sh@338 -- $ local 'op=<' 00:37:18.166 17:02:22 -- scripts/common.sh@340 -- $ ver1_l=2 00:37:18.166 17:02:22 -- scripts/common.sh@341 -- $ ver2_l=1 00:37:18.166 17:02:22 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:37:18.166 17:02:22 -- scripts/common.sh@344 -- $ case "$op" in 00:37:18.166 17:02:22 -- scripts/common.sh@345 -- $ : 1 00:37:18.166 17:02:22 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:37:18.166 17:02:22 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:18.166 17:02:22 -- scripts/common.sh@365 -- $ decimal 1 00:37:18.166 17:02:22 -- scripts/common.sh@353 -- $ local d=1 00:37:18.166 17:02:22 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:37:18.166 17:02:22 -- scripts/common.sh@355 -- $ echo 1 00:37:18.166 17:02:22 -- scripts/common.sh@365 -- $ ver1[v]=1 00:37:18.166 17:02:22 -- scripts/common.sh@366 -- $ decimal 2 00:37:18.166 17:02:22 -- scripts/common.sh@353 -- $ local d=2 00:37:18.166 17:02:22 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:37:18.166 17:02:22 -- scripts/common.sh@355 -- $ echo 2 00:37:18.166 17:02:22 -- scripts/common.sh@366 -- $ ver2[v]=2 00:37:18.166 17:02:22 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:37:18.166 17:02:22 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:37:18.166 17:02:22 -- scripts/common.sh@368 -- $ return 0 00:37:18.166 17:02:22 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:18.166 17:02:22 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:37:18.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.166 --rc genhtml_branch_coverage=1 00:37:18.166 --rc genhtml_function_coverage=1 00:37:18.166 --rc genhtml_legend=1 00:37:18.166 --rc geninfo_all_blocks=1 00:37:18.166 --rc geninfo_unexecuted_blocks=1 00:37:18.166 00:37:18.166 ' 00:37:18.166 17:02:22 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:37:18.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.166 --rc genhtml_branch_coverage=1 00:37:18.166 --rc genhtml_function_coverage=1 00:37:18.166 --rc genhtml_legend=1 00:37:18.166 --rc geninfo_all_blocks=1 00:37:18.166 --rc geninfo_unexecuted_blocks=1 00:37:18.166 00:37:18.166 ' 00:37:18.166 17:02:22 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:37:18.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.166 --rc genhtml_branch_coverage=1 00:37:18.166 
--rc genhtml_function_coverage=1 00:37:18.166 --rc genhtml_legend=1 00:37:18.166 --rc geninfo_all_blocks=1 00:37:18.166 --rc geninfo_unexecuted_blocks=1 00:37:18.166 00:37:18.166 ' 00:37:18.166 17:02:22 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:37:18.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:18.166 --rc genhtml_branch_coverage=1 00:37:18.166 --rc genhtml_function_coverage=1 00:37:18.166 --rc genhtml_legend=1 00:37:18.166 --rc geninfo_all_blocks=1 00:37:18.166 --rc geninfo_unexecuted_blocks=1 00:37:18.166 00:37:18.166 ' 00:37:18.166 17:02:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:18.166 17:02:22 -- scripts/common.sh@15 -- $ shopt -s extglob 00:37:18.166 17:02:22 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:18.166 17:02:22 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:18.166 17:02:22 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:18.166 17:02:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.166 17:02:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.166 17:02:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.166 17:02:22 -- paths/export.sh@5 -- $ export PATH 00:37:18.166 17:02:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:18.166 17:02:22 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:18.166 17:02:22 -- common/autobuild_common.sh@486 -- $ date +%s 00:37:18.166 17:02:22 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728918142.XXXXXX 00:37:18.166 17:02:22 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728918142.KZFIV3 00:37:18.166 17:02:22 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:37:18.166 17:02:22 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:37:18.166 17:02:22 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:37:18.166 17:02:22 -- common/autobuild_common.sh@499 -- $ 
scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:18.166 17:02:22 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:18.166 17:02:22 -- common/autobuild_common.sh@502 -- $ get_config_params 00:37:18.166 17:02:22 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:37:18.166 17:02:22 -- common/autotest_common.sh@10 -- $ set +x 00:37:18.166 17:02:22 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:37:18.166 17:02:22 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:37:18.166 17:02:22 -- pm/common@17 -- $ local monitor 00:37:18.166 17:02:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:18.166 17:02:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:18.166 17:02:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:18.166 17:02:22 -- pm/common@21 -- $ date +%s 00:37:18.166 17:02:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:18.166 17:02:22 -- pm/common@21 -- $ date +%s 00:37:18.166 17:02:22 -- pm/common@25 -- $ sleep 1 00:37:18.166 17:02:22 -- pm/common@21 -- $ date +%s 00:37:18.166 17:02:22 -- pm/common@21 -- $ date +%s 00:37:18.166 17:02:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728918142 00:37:18.166 17:02:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728918142 00:37:18.166 17:02:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728918142 00:37:18.166 17:02:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728918142 00:37:18.166 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728918142_collect-cpu-load.pm.log 00:37:18.166 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728918142_collect-vmstat.pm.log 00:37:18.166 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728918142_collect-cpu-temp.pm.log 00:37:18.166 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728918142_collect-bmc-pm.bmc.pm.log 00:37:19.106 17:02:23 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:37:19.106 17:02:23 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:37:19.106 17:02:23 -- spdk/autopackage.sh@14 -- $ timing_finish 00:37:19.106 17:02:23 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:19.106 17:02:23 
-- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:19.106 17:02:23 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:19.106 17:02:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:19.106 17:02:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:19.106 17:02:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:19.106 17:02:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:19.106 17:02:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:19.106 17:02:23 -- pm/common@44 -- $ pid=832540 00:37:19.106 17:02:23 -- pm/common@50 -- $ kill -TERM 832540 00:37:19.106 17:02:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:19.106 17:02:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:19.106 17:02:23 -- pm/common@44 -- $ pid=832542 00:37:19.106 17:02:23 -- pm/common@50 -- $ kill -TERM 832542 00:37:19.106 17:02:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:19.106 17:02:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:19.106 17:02:23 -- pm/common@44 -- $ pid=832543 00:37:19.106 17:02:23 -- pm/common@50 -- $ kill -TERM 832543 00:37:19.106 17:02:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:19.106 17:02:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:19.106 17:02:23 -- pm/common@44 -- $ pid=832568 00:37:19.106 17:02:23 -- pm/common@50 -- $ sudo -E kill -TERM 832568 00:37:19.106 + [[ -n 270874 ]] 00:37:19.106 + sudo kill 270874 00:37:19.116 [Pipeline] } 00:37:19.131 [Pipeline] // stage 00:37:19.137 [Pipeline] } 00:37:19.151 [Pipeline] // timeout 00:37:19.157 [Pipeline] } 00:37:19.172 [Pipeline] // catchError 00:37:19.177 [Pipeline] } 00:37:19.192 [Pipeline] // wrap 00:37:19.198 [Pipeline] } 00:37:19.211 [Pipeline] // catchError 00:37:19.220 [Pipeline] stage 00:37:19.223 [Pipeline] { (Epilogue) 00:37:19.236 [Pipeline] catchError 00:37:19.237 [Pipeline] { 00:37:19.250 [Pipeline] echo 00:37:19.252 Cleanup processes 00:37:19.258 [Pipeline] sh 00:37:19.647 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:19.647 832730 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:19.647 833039 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:19.710 [Pipeline] sh 00:37:20.001 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:20.001 ++ grep -v 'sudo pgrep' 00:37:20.001 ++ awk '{print $1}' 00:37:20.001 + sudo kill -9 832730 00:37:20.013 [Pipeline] sh 00:37:20.298 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:32.518 [Pipeline] sh 00:37:32.802 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:32.802 Artifacts sizes are good 00:37:32.816 [Pipeline] archiveArtifacts 00:37:32.825 Archiving artifacts 00:37:32.944 [Pipeline] sh 00:37:33.230 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:33.245 [Pipeline] cleanWs 00:37:33.255 [WS-CLEANUP] Deleting project workspace... 00:37:33.255 [WS-CLEANUP] Deferred wipeout is used... 
00:37:33.262 [WS-CLEANUP] done 00:37:33.264 [Pipeline] } 00:37:33.283 [Pipeline] // catchError 00:37:33.294 [Pipeline] sh 00:37:33.577 + logger -p user.info -t JENKINS-CI 00:37:33.584 [Pipeline] } 00:37:33.597 [Pipeline] // stage 00:37:33.601 [Pipeline] } 00:37:33.614 [Pipeline] // node 00:37:33.618 [Pipeline] End of Pipeline 00:37:33.646 Finished: SUCCESS